As ransomware families such as Akira and Ryuk began knocking organizations offline around the globe, many defenders reacted the same way: harden perimeters, push automated playbooks, and quarantine suspect endpoints. Romanus Prabhu Raymond, Director of Technology at ManageEngine, cautioned that reflexive containment can create worse outcomes. When a hospital workstation or a bank teller terminal is swept offline by an automatic rule, the disruption to patient care and critical services can rival the damage the malware itself would have caused.
Clients asked for tougher containment tools, Raymond said, yet that demand exposed a core tension between fast technical response and the reality of operational risk. That dilemma — how to act quickly against threats without causing collateral harm — is one reason ethical cybersecurity has moved to the center of debate in 2025.
Raymond spoke about the problem shortly before his presentation at the Cyber Security Expo in Amsterdam, explaining how some leading teams are rejecting a binary security-versus-privacy trade-off and instead trying to build trust into the way they secure systems. “Ethical cybersecurity goes beyond defending systems and data – it's about applying security practices responsibly to protect organizations, individuals, and society at large,” he said.
The industry is operating under new pressures. High-profile breaches have pushed regulators and boards to demand clearer accountability, while the rapid integration of artificial intelligence into defensive tooling raises fresh questions around transparency, fairness, and who takes responsibility for consequential actions. Security has shifted from a marketable differentiator to a baseline expectation; what separates one vendor or operator from another is how they treat data and make decisions about risk.
Raymond uses a simple neighborhood metaphor to make the point: install cameras to protect public spaces, but avoid peering through residents' windows. Security controls should protect shared infrastructure without intruding on private activity. ManageEngine has tried to bake that perspective into product development through an “ethical by design” approach that puts fairness, transparency, and accountability into every release cycle. The firm says it does not monetize or actively monitor customer data, arguing that data belongs to the customer and should be treated as such.
That stance shows up in engineering and operational practice. Raymond described a “trust by design” philosophy that requires responsibility and auditability at each stage of development, from threat modeling and design reviews to deployment and support. New features intended for endpoints are verified against industry standards and security requirements before they reach customers, and product defaults favor the least invasive option that still achieves the security goal.
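To make the “least invasive default” concrete, here is a minimal sketch of how such a rule might be encoded, assuming a hypothetical ladder of response actions and severity scores; it illustrates the principle, not ManageEngine's implementation.

```python
# Hypothetical sketch: pick the least invasive response that still meets
# the security goal. Action names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    invasiveness: int      # 1 = least invasive
    mitigates_up_to: int   # highest threat severity this action handles

# Ordered ladder of responses, least invasive first.
LADDER = [
    Action("alert_analyst", invasiveness=1, mitigates_up_to=3),
    Action("block_process", invasiveness=2, mitigates_up_to=6),
    Action("isolate_network_segment", invasiveness=3, mitigates_up_to=8),
    Action("quarantine_endpoint", invasiveness=4, mitigates_up_to=10),
]

def default_response(threat_severity: int) -> Action:
    """Return the least invasive action that still covers the threat."""
    for action in LADDER:  # LADDER is sorted by invasiveness
        if action.mitigates_up_to >= threat_severity:
            return action
    return LADDER[-1]  # fall back to the strongest control

print(default_response(5).name)  # -> block_process
```

The design choice is that escalation to more disruptive controls is explicit and ordered, so a full quarantine is never the default answer to a mid-severity finding.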
The approach extends across ManageEngine’s global footprint. The company runs data centers in multiple jurisdictions that align with local privacy and regulatory obligations, and staff training covers not only technical procedures but also the appropriate handling of customer information. ManageEngine’s trans-localization strategy assigns regional teams to serve local customers, a configuration the company says helps with legal compliance, cultural fit, and operational responsiveness.
Artificial intelligence complicates these choices. As AI moves from assistive analytics to decision-making roles, questions about who is accountable for automated actions grow sharper. Raymond outlined ManageEngine’s “SHE AI principles”: Secure AI, Human AI, and Ethical AI. Secure AI focuses on protections against manipulation and adversarial input. Human AI keeps people in the loop for significant remediation steps; for example, if an AI flags a critical endpoint as malicious, it routes the finding to a human analyst instead of immediately ejecting the device from the network — a safeguard that can prevent outages in sensitive environments such as hospitals and financial services.
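A rough sketch of that kind of human-in-the-loop gate might look like the following; the Verdict fields, confidence threshold, and analyst queue are assumptions for illustration, not ManageEngine's pipeline.

```python
# Minimal sketch of a human-in-the-loop gate: significant remediation on
# critical assets always routes to a person. Types and thresholds are
# hypothetical.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Verdict:
    endpoint_id: str
    classification: str    # e.g. "malicious"
    confidence: float
    endpoint_is_critical: bool
    rationale: str

analyst_queue = Queue()  # holds Verdict objects awaiting human review

def handle_verdict(v: Verdict) -> str:
    # Critical endpoints are never auto-contained; a human decides.
    if v.classification == "malicious" and v.endpoint_is_critical:
        analyst_queue.put(v)
        return "escalated_to_analyst"
    # Low-stakes endpoints may be contained automatically.
    if v.classification == "malicious" and v.confidence >= 0.95:
        return "auto_quarantine"
    return "monitor"

v = Verdict("icu-workstation-07", "malicious", 0.97,
            endpoint_is_critical=True,
            rationale="Connecting to too many network devices")
print(handle_verdict(v))  # -> escalated_to_analyst
```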
The Ethical AI pillar stresses explainability. ManageEngine designs alerts so operators and auditors understand the rationale. An illustrative alert in the company’s documentation reads: “The endpoint cannot log in at this time and is trying to connect to too many network devices.” By offering a plain-language cause for action, the system helps compliance teams evaluate a response and supports human review.
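In engineering terms, explainability often comes down to how the alert payload itself is structured. The sketch below shows one plausible shape, with a plain-language rationale carried alongside the supporting evidence; the schema and field names are assumptions, not the company's actual format.

```python
# Illustrative sketch of an explainable alert payload; field names are
# assumptions, not ManageEngine's schema.
import json
from datetime import datetime, timezone

def build_alert(endpoint_id: str, action: str,
                rationale: str, evidence: list[str]) -> str:
    """Package an alert so operators and auditors can see why it fired."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "endpoint_id": endpoint_id,
        "recommended_action": action,
        "rationale": rationale,   # plain-language cause, for humans
        "evidence": evidence,     # the observations behind the call
    }, indent=2)

print(build_alert(
    "teller-12",
    "escalate_to_analyst",
    "The endpoint cannot log in at this time and is trying to "
    "connect to too many network devices.",
    ["47 outbound SMB connections in 60s", "3 failed logins"],
))
```

Because the rationale travels with the alert, a compliance reviewer can evaluate the response months later without reconstructing the model's reasoning.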
That balance between monitoring to detect threats early and avoiding a workplace culture of constant surveillance is one of the trickiest operational trade-offs. Raymond acknowledged that proactive observation is necessary to spot subtle intrusions, yet excessive collection or unfettered visibility into employees’ behavior risks turning security programs into suspicion engines. ManageEngine follows principles of data minimization and purpose-limited monitoring: collect only what is required to defend systems, assign clear security use cases to each data element, analyze patterns using anonymized datasets when possible, and put governance controls on who can see what information and for how long.
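Those principles translate into fairly ordinary engineering. The hedged sketch below shows purpose-limited collection with pseudonymization: fields without a declared security use case are dropped, and identities are replaced with keyed-hash tokens. The event shape, field names, and key handling are hypothetical.

```python
# Sketch of data minimization: keep only fields with a declared purpose,
# and replace identities with stable pseudonyms before analysis.
import hashlib
import hmac

# Each collected field must map to a declared security use case.
ALLOWED_FIELDS = {
    "process_name": "malware detection",
    "dest_port": "lateral-movement detection",
    "bytes_out": "exfiltration detection",
}

PSEUDONYM_KEY = b"rotate-me-regularly"  # placeholder secret, not real key mgmt

def pseudonymize(user_id: str) -> str:
    """Keyed hash so analysts see stable tokens, not identities."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

def minimize(event: dict) -> dict:
    """Keep only purposed fields; swap the identity for a pseudonym."""
    kept = {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
    kept["actor"] = pseudonymize(event["user_id"])
    return kept

raw = {"user_id": "jsmith", "process_name": "rclone.exe",
       "dest_port": 445, "bytes_out": 1_200_000,
       "browser_history": ["..."]}
print(minimize(raw))  # browser_history never enters the analytics store
```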
Those governance controls include role-based access rules, retention periods aligned with legal and regulatory obligations, and logging that demonstrates who accessed data and why. Raymond argued that vendors must act as custodians of digital ethics and earn trust instead of assuming it is automatic. ManageEngine participates in industry discussions, publishes guidance, and embeds standards from the ISO 27000 family and GDPR-aligned controls into product lifecycles.
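A minimal sketch of those controls, with illustrative roles and retention periods, might look like this: every read is checked against a role, recorded with a stated reason, and data past its retention window is flagged for deletion.

```python
# Sketch of governance controls: role-based access, a stated reason for
# every read, and retention enforcement. Roles and periods are assumptions.
import logging
from datetime import datetime, timedelta, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

ROLE_CAN_READ = {
    "soc_analyst": {"network_events"},
    "compliance": {"network_events", "access_audit"},
}
RETENTION = {"network_events": timedelta(days=90)}

def read_dataset(user: str, role: str, dataset: str, reason: str) -> None:
    if dataset not in ROLE_CAN_READ.get(role, set()):
        audit_log.warning("DENIED user=%s role=%s dataset=%s",
                          user, role, dataset)
        raise PermissionError(f"{role} may not read {dataset}")
    # Every access records who, what, and why.
    audit_log.info("READ user=%s dataset=%s reason=%s",
                   user, dataset, reason)

def is_expired(dataset: str, written_at: datetime) -> bool:
    """True once the record has outlived its retention period."""
    return datetime.now(timezone.utc) - written_at > RETENTION[dataset]

read_dataset("asha", "soc_analyst", "network_events",
             reason="investigating alert #4417")
```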
Looking ahead, Raymond singled out two major ethical challenges. The first is autonomous security driven by AI: as security operations centers add automation that can take decisive steps, explainability and accountability become nonnegotiable. The second is the arrival of quantum computing. Quantum machines could render traditional public-key cryptography insecure, undermining the foundations of secure communications and forcing a wholesale reassessment of encryption strategy. Other emerging technologies, such as biometric systems, introduce privacy risks if designers do not build in limits on storage, reuse, and cross-matching.
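One common preparation for the quantum threat is crypto-agility: routing all key establishment through a named-algorithm registry so a post-quantum scheme can be swapped in without rewriting call sites. The sketch below uses stub implementations; the algorithm names are placeholders, and a real deployment would call a vetted cryptography library rather than these stand-ins.

```python
# Sketch of crypto-agility: one registry decides the key-establishment
# algorithm, so call sites never hard-code it. Stubs only, no real crypto.
from typing import Callable

KEM_REGISTRY: dict[str, Callable[[], tuple[bytes, bytes]]] = {}

def register_kem(name: str):
    """Decorator that files a key-establishment function under a name."""
    def wrap(fn):
        KEM_REGISTRY[name] = fn
        return fn
    return wrap

@register_kem("classical-x25519")
def classical_stub() -> tuple[bytes, bytes]:
    return b"pubkey-classical", b"secret-classical"  # stand-in only

@register_kem("pq-ml-kem-768")
def pq_stub() -> tuple[bytes, bytes]:
    return b"pubkey-pq", b"secret-pq"  # stand-in only

# One configuration value, not scattered code changes, picks the scheme.
ACTIVE_KEM = "pq-ml-kem-768"
public_key, shared_secret = KEM_REGISTRY[ACTIVE_KEM]()
```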
For organizations that want to embed ethical thinking into cybersecurity, Raymond offered three practical moves. Boards should adopt a cybersecurity ethics charter that defines acceptable risk tolerances and decision rights. Procurement and architecture teams should make privacy and ethical impact part of vendor selection and technical design. Security programs must operationalize ethics through ongoing training, clear controls, and playbooks that explain not just what to do but why certain safeguards exist.
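The playbook point lends itself to a simple illustration: record the “why” next to each step so responders see the safeguard, not just the instruction. The structure and wording below are hypothetical.

```python
# Illustrative playbook where every step carries its rationale;
# content is an assumption, not a published ManageEngine playbook.
PLAYBOOK = {
    "name": "suspected-ransomware-on-endpoint",
    "steps": [
        {"do": "notify the on-call analyst",
         "why": "human review prevents outages on critical systems"},
        {"do": "snapshot volatile memory before any containment",
         "why": "preserves evidence that quarantine would destroy"},
        {"do": "isolate the endpoint only after analyst approval",
         "why": "reflexive containment can harm patient care or services"},
    ],
}

for i, step in enumerate(PLAYBOOK["steps"], 1):
    print(f"{i}. {step['do']}\n   rationale: {step['why']}")
```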
Those measures form a framework in which security and privacy are treated as complementary rather than competing priorities. Organizations that treat ethical practice as the foundation for trusted technology are better placed to innovate responsibly, preserve human oversight over critical actions, and maintain the public confidence that modern systems require.