Project Management Data Security

Explore top LinkedIn content from expert professionals.

  • Luiza Jarovsky, PhD (Influencer)

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (92,000+ subscribers), Mother of 3

    128,114 followers

    🚨 AI Privacy Risks & Mitigations: Large Language Models (LLMs), by Isabel Barberá, is the 107-page report about AI & Privacy you were waiting for! [Bookmark & share below.]

    Topics covered:

    - Background: "This section introduces Large Language Models, how they work, and their common applications. It also discusses performance evaluation measures, helping readers understand the foundational aspects of LLM systems."
    - Data Flow and Associated Privacy Risks in LLM Systems: "Here, we explore how privacy risks emerge across different LLM service models, emphasizing the importance of understanding data flows throughout the AI lifecycle. This section also identifies risks and mitigations and examines roles and responsibilities under the AI Act and the GDPR."
    - Data Protection and Privacy Risk Assessment: Risk Identification: "This section outlines criteria for identifying risks and provides examples of privacy risks specific to LLM systems. Developers and users can use this section as a starting point for identifying risks in their own systems."
    - Data Protection and Privacy Risk Assessment: Risk Estimation & Evaluation: "Guidance on how to analyse, classify and assess privacy risks is provided here, with criteria for evaluating both the probability and severity of risks. This section explains how to derive a final risk evaluation to prioritize mitigation efforts effectively."
    - Data Protection and Privacy Risk Control: "This section details risk treatment strategies, offering practical mitigation measures for common privacy risks in LLM systems. It also discusses residual risk acceptance and the iterative nature of risk management in AI systems."
    - Residual Risk Evaluation: "Evaluating residual risks after mitigation is essential to ensure risks fall within acceptable thresholds and do not require further action. This section outlines how residual risks are evaluated to determine whether additional mitigation is needed or if the model or LLM system is ready for deployment."
    - Review & Monitor: "This section covers the importance of reviewing risk management activities and maintaining a risk register. It also highlights the importance of continuous monitoring to detect emerging risks, assess real-world impact, and refine mitigation strategies."
    - Examples of LLM Systems’ Risk Assessments: "Three detailed use cases are provided to demonstrate the application of the risk management framework in real-world scenarios. These examples illustrate how risks can be identified, assessed, and mitigated across various contexts."
    - Reference to Tools, Methodologies, Benchmarks, and Guidance: "The final section compiles tools, evaluation metrics, benchmarks, methodologies, and standards to support developers and users in managing risks and evaluating the performance of LLM systems."

    👉 Download it below.
    👉 NEVER MISS my AI governance updates: join my newsletter's 58,500+ subscribers (below).

    #AI #AIGovernance #Privacy #DataProtection #AIRegulation #EDPB
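The Risk Estimation & Evaluation step described above (combining the probability and severity of a risk into a final evaluation) can be sketched as a simple risk matrix. The level names, scoring, and thresholds below are illustrative assumptions, not the report's actual scale:

```python
# Hypothetical probability x severity risk matrix; levels and
# thresholds are illustrative, not taken from the report.
LEVELS = ["low", "medium", "high"]

def risk_score(probability: str, severity: str) -> int:
    """Combine ordinal probability and severity into a numeric score."""
    return (LEVELS.index(probability) + 1) * (LEVELS.index(severity) + 1)

def evaluate(probability: str, severity: str) -> str:
    """Map the combined score to a final risk evaluation."""
    score = risk_score(probability, severity)
    if score >= 6:
        return "unacceptable: mitigate before deployment"
    if score >= 3:
        return "tolerable: mitigate where practicable"
    return "acceptable: monitor"

print(evaluate("high", "medium"))  # score 6 -> unacceptable
```

A final evaluation like this is what lets teams prioritize mitigation effort and decide whether residual risk is low enough to deploy.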

  • Armand Ruiz (Influencer)

    building AI systems

    205,725 followers

    How To Handle Sensitive Information in your next AI Project

    It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintain trust and comply with privacy regulations. Here are 5 best practices to follow:

    1. Identify and Classify Sensitive Data
    Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as GDPR or the California Consumer Privacy Act.

    2. Minimize Data Exposure
    Only share the necessary information with AI endpoints. For PII, such as names, addresses, or social security numbers, consider redacting this information before making API calls, especially if the data could be linked to sensitive applications, like healthcare or financial services.

    3. Avoid Sharing Highly Sensitive Information
    Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

    4. Implement Data Anonymization
    When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

    5. Regularly Review and Update Privacy Practices
    Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.
Remember, safeguarding sensitive information is not just about compliance — it's about earning and keeping the trust of your users.
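The redaction step in point 2 (stripping PII before an API call) can be sketched with a few regular expressions. This is a minimal illustration with hypothetical, far-from-exhaustive patterns; a production system should use a vetted PII-detection library:

```python
import re

# Illustrative PII patterns (assumption: not exhaustive; use a vetted
# PII-detection library in production).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens before an API call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact John at john.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(redact(prompt))
```

Redacting before the call means the AI endpoint only ever sees placeholder tokens, never the raw identifiers.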

  • Pooja Jain

    Storyteller | Lead Data Engineer@Wavicle| Linkedin Top Voice 2025,2024 | Linkedin Learning Instructor | 2xGCP & AWS Certified | LICAP’2022

    192,832 followers

    Do you think Data Governance: All Show, No Impact?

    → Polished policies ✓
    → Fancy dashboards ✓
    → Impressive jargon ✓

    But here's the reality check: most data governance initiatives look great in boardroom presentations yet fail to move the needle where it matters.

    The numbers don't lie. Poor data quality bleeds organizations dry: $12.9 million annually according to Gartner. Yet those who get governance right see 30% higher ROI by 2026. What's the difference?

    ❌ It's not about the theater of governance.
    ✅ It's about data engineers who embed governance principles directly into solution architectures, making data quality and compliance invisible infrastructure rather than visible overhead.

    Here’s a 6-step roadmap to build a resilient, secure, and transparent data foundation:

    1️⃣ 𝗘𝘀𝘁𝗮𝗯𝗹𝗶𝘀𝗵 𝗥𝗼𝗹𝗲𝘀 & 𝗣𝗼𝗹𝗶𝗰𝗶𝗲𝘀
    Define clear ownership, stewardship, and documentation standards. This sets the tone for accountability and consistency across teams.

    2️⃣ 𝗔𝗰𝗰𝗲𝘀𝘀 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 & 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆
    Implement role-based access, encryption, and audit trails. Stay compliant with GDPR/CCPA and protect sensitive data from misuse.

    3️⃣ 𝗗𝗮𝘁𝗮 𝗜𝗻𝘃𝗲𝗻𝘁𝗼𝗿𝘆 & 𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻
    Catalog all data assets. Tag them by sensitivity, usage, and business domain. Visibility is the first step to control.

    4️⃣ 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 & 𝗗𝗮𝘁𝗮 𝗤𝘂𝗮𝗹𝗶𝘁𝘆 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸
    Set up automated checks for freshness, completeness, and accuracy. Use tools like dbt tests, Great Expectations, and Monte Carlo to catch issues early.

    5️⃣ 𝗟𝗶𝗻𝗲𝗮𝗴𝗲 & 𝗜𝗺𝗽𝗮𝗰𝘁 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀
    Track data flow from source to dashboard. When something breaks, know what’s affected and who needs to be informed.

    6️⃣ 𝗦𝗟𝗔 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 & 𝗥𝗲𝗽𝗼𝗿𝘁𝗶𝗻𝗴
    Define SLAs for critical pipelines. Build dashboards that report uptime, latency, and failure rates, because the business cares about reliability, not tech jargon.

    With the rising AI innovations, it's important to emphasise the governance aspects data engineers need to implement for robust data management.

    Do not underestimate the power of Data Quality and Validation by adopting:
    ↳ Automated data quality checks
    ↳ Schema validation frameworks
    ↳ Data lineage tracking
    ↳ Data quality SLAs
    ↳ Monitoring & alerting setup

    While it's equally important to consider the following Data Security & Privacy aspects:
    ↳ Threat Modeling
    ↳ Encryption Strategies
    ↳ Access Control
    ↳ Privacy by Design
    ↳ Compliance Expertise

    Some incredible folks to follow in this area: Chad Sanderson, George Firican 🎯, Mark Freeman II, Piotr Czarnas, Dylan Anderson. Who else would you like to add?

    ▶️ Stay tuned with me (Pooja) for more on Data Engineering.
    ♻️ Reshare if this resonates with you!
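The automated checks in step 4️⃣ (freshness, completeness, accuracy) can be sketched in plain Python. In practice tools like dbt tests or Great Expectations provide these; the accuracy rule and row schema below are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

# Illustrative data quality checks; real pipelines would express these
# as dbt tests or Great Expectations suites.

def check_freshness(last_loaded: datetime, max_age_hours: int = 24) -> bool:
    """Data must have been loaded within the freshness SLA."""
    return datetime.now(timezone.utc) - last_loaded <= timedelta(hours=max_age_hours)

def check_completeness(rows, required) -> bool:
    """Every required column must be populated in every row."""
    return all(row.get(col) is not None for row in rows for col in required)

def check_accuracy(rows) -> bool:
    """Example rule (assumed schema): order amounts are non-negative."""
    return all(row["amount"] >= 0 for row in rows)

rows = [{"id": 1, "amount": 19.99}, {"id": 2, "amount": 0.0}]
assert check_completeness(rows, ["id", "amount"])
assert check_accuracy(rows)
```

Wiring checks like these into the pipeline, with alerting on failure, is what turns governance from boardroom slides into invisible infrastructure.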

  • Shiv Kataria

    Senior Key Expert R&D @ Siemens | Risk Governance | Incident Response | Cybersecurity, Operational Technology

    23,258 followers

    Industrial Cyber Security: Layer by Layer

    OT environments can't rely on repackaged IT security checklists. Frameworks like IEC 62443 and NIST SP 800-82 demand a defence-in-depth strategy tailored to physical processes, real-time constraints, and integrated safety systems. This layered defence model visualizes the approach, moving from the physical perimeter to the core data:

    ✏️ Perimeter Security: Starts with physical controls like site fencing and progresses to network gateways that enforce one-way data flow.
    ✏️ Network Security: Involves segmenting the network (per the Purdue model), using industrial firewalls, and securing all remote access points.
    ✏️ Endpoint Security: Focuses on locking down devices with application whitelisting, ensuring secure boot processes, and using anomaly detection to spot unusual behavior.
    ✏️ Application Security: Secures the software layer through code-signing for logic downloads and hardening engineering workstations.
    ✏️ Data Security: Protects information itself with encrypted backups, PKI certificates for authenticity, and integrity monitoring.

    This entire strategy rests on two pillars:
    1. Prevention: Proactive measures like architecture reviews, role-based access control (RBAC), and disciplined patch management.
    2. Monitoring & Response: OT-aware security operations, practiced incident response playbooks, and the ability to perform forensics on industrial controllers.

    Why it matters: the data is clear. Over 80% of recent OT incidents exploited weak segmentation or unmanaged assets. Conversely, plants with layered controls have cut their mean-time-to-detect threats by 60% (Dragos 2024).

    Which of these security rings do you see most neglected in real-world plants?

    #OTSecurity #IEC62443 #NIST80082 #DefenseInDepth #IndustrialCyber #CriticalInfrastructure #CyberResilience
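The application-whitelisting control under Endpoint Security reduces to a hash-based allowlist check, sketched below. The file names, builds, and hashes are illustrative assumptions, not any real product's allowlist format:

```python
import hashlib

# Hypothetical allowlist: approved binary path -> SHA-256 of the
# approved build. In a real OT deployment this comes from a signed,
# centrally managed policy.
ALLOWLIST = {
    "scada_hmi.exe": hashlib.sha256(b"approved-build-1.4.2").hexdigest(),
}

def may_execute(path: str, binary: bytes) -> bool:
    """Permit execution only if the binary's hash matches the allowlist."""
    expected = ALLOWLIST.get(path)
    return expected is not None and hashlib.sha256(binary).hexdigest() == expected

assert may_execute("scada_hmi.exe", b"approved-build-1.4.2")
assert not may_execute("scada_hmi.exe", b"tampered-build")
```

A default-deny allowlist like this is why whitelisting suits OT endpoints better than signature-based antivirus: the set of legitimate software on a controller or HMI rarely changes.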

  • Taimur Ijlal

    ☁️ Cloud & AI Security Leader | Senior Security Consultant @ AWS | Teaching 70K+ Professionals How to Secure Cloud & Agentic AI | Best-Selling Author | YouTube: Cloud Security Guy

    25,510 followers

    🎉 How to Make Cybersecurity Awareness NOT Boring

    Cybersecurity awareness training can often be a snooze fest. 😴 Here are a few ways to make it engaging:

    🎮 1. Gamify the Training
    Who doesn't love a good game? Turn your cybersecurity training into a game or competition. Award points for correct answers and offer small prizes for winners. Trust me, people will pay attention.

    🎥 2. Use Real-World Examples
    Skip the jargon and go straight to real-world examples that people can relate to. Show them news clips of high-profile cyber attacks and explain how basic awareness could have prevented them.

    📱 3. Make It Interactive
    Interactive modules can make a world of difference. Use quizzes, flashcards, and even augmented reality apps to make the training hands-on.

    🎭 4. Role-Playing Exercises
    Let your team act out different scenarios where they have to identify phishing emails or secure compromised accounts. It's a fun and effective way to test their knowledge.

    🎤 5. Guest Speakers
    Invite cybersecurity experts to share their experiences and insights. A fresh perspective can make the training more engaging and offer valuable real-world advice.

    📊 6. Track and Celebrate Progress
    Use metrics to track participation and performance. Celebrate the wins, no matter how small, to keep everyone motivated.

    Remember, the goal is not just to "get through" the training but to create a culture of continuous cybersecurity awareness.

    Have you tried any innovative methods to make cybersecurity training more engaging? Share your experiences in the comments below! 👇

    #Cybersecurity #CyberAwareness #Training #Engagement #Innovation

  • Dr. Antonio J. Jara

    Expert in IoT | Physical AI | Data Spaces | Urban Digital Twin | Cybersecurity | Smart Cities | Certified AI Auditor by ISACA (AAIA / CISA / CISM)

    33,360 followers

    🚀 𝐍𝐞𝐰 𝐏𝐮𝐛𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧! 𝐈𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐢𝐧𝐠 𝐭𝐡𝐞 𝐂𝐑𝐀 𝐢𝐧𝐭𝐨 𝐭𝐡𝐞 𝐈𝐨𝐓 𝐋𝐢𝐟𝐞𝐜𝐲𝐜𝐥𝐞: 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬, 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐞𝐬, 𝐚𝐧𝐝 𝐁𝐞𝐬𝐭 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐞𝐬

    Proud to share our newest peer-reviewed article in Information (MDPI), co-authored with Miguel Ángel Ortega Velázquez, Iris Cuevas Martinez, and Dr. Antonio J. Jara (myself, as ISACA CISM/CISA/AAIA).

    𝘛𝘩𝘪𝘴 𝘸𝘰𝘳𝘬 𝘢𝘳𝘳𝘪𝘷𝘦𝘴 𝘢𝘵 𝘢 𝘤𝘳𝘶𝘤𝘪𝘢𝘭 𝘮𝘰𝘮𝘦𝘯𝘵, 𝘢𝘴 𝘵𝘩𝘦 𝘌𝘜 𝘊𝘺𝘣𝘦𝘳 𝘙𝘦𝘴𝘪𝘭𝘪𝘦𝘯𝘤𝘦 𝘈𝘤𝘵 (𝘊𝘙𝘈) 𝘣𝘦𝘤𝘰𝘮𝘦𝘴 𝘵𝘩𝘦 𝘮𝘰𝘴𝘵 𝘪𝘮𝘱𝘢𝘤𝘵𝘧𝘶𝘭 𝘳𝘦𝘨𝘶𝘭𝘢𝘵𝘪𝘰𝘯 𝘧𝘰𝘳 𝘐𝘰𝘛 𝘮𝘢𝘯𝘶𝘧𝘢𝘤𝘵𝘶𝘳𝘦𝘳𝘴 𝘪𝘯 𝘵𝘩𝘦 𝘤𝘰𝘮𝘪𝘯𝘨 𝘺𝘦𝘢𝘳𝘴.

    🔥 𝐓𝐨𝐩 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲𝐬

    1️⃣ 𝐀 𝐜𝐨𝐦𝐩𝐥𝐞𝐭𝐞 𝐦𝐞𝐭𝐡𝐨𝐝𝐨𝐥𝐨𝐠𝐲 𝐭𝐨 𝐜𝐨𝐧𝐯𝐞𝐫𝐭 𝐥𝐞𝐠𝐚𝐥 𝐂𝐑𝐀 𝐭𝐞𝐱𝐭 𝐢𝐧𝐭𝐨 𝐞𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 𝐫𝐞𝐚𝐥𝐢𝐭𝐲. We introduce a two-phase framework:
    • Phase 1: Systematically transform CRA Articles 13–14 and Annexes into atomic, testable engineering requirements.
    • Phase 2: Apply Analytic Hierarchy Process (AHP) quantitative scoring to produce a defensible readiness metric.

    2️⃣ 𝐀 𝐟𝐮𝐥𝐥 𝐥𝐢𝐟𝐞𝐜𝐲𝐜𝐥𝐞-𝐛𝐚𝐬𝐞𝐝 𝐂𝐑𝐀 𝐜𝐡𝐞𝐜𝐤𝐥𝐢𝐬𝐭 𝐟𝐨𝐫 𝐈𝐨𝐓 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐬. From secure design to post-market obligations, the paper provides an actionable DevSecOps-aligned checklist.

    3️⃣ 𝐀 𝐝𝐞𝐟𝐞𝐧𝐬𝐢𝐛𝐥𝐞 𝐫𝐢𝐬𝐤-𝐛𝐚𝐬𝐞𝐝 𝐰𝐞𝐢𝐠𝐡𝐭𝐢𝐧𝐠 𝐦𝐨𝐝𝐞𝐥 𝐮𝐬𝐢𝐧𝐠 𝐭𝐡𝐞 𝐀𝐧𝐚𝐥𝐲𝐭𝐢𝐜 𝐇𝐢𝐞𝐫𝐚𝐫𝐜𝐡𝐲 𝐏𝐫𝐨𝐜𝐞𝐬𝐬 (𝐀𝐇𝐏). We derive consistent domain weights, ensuring mathematically validated prioritization of CRA domains.

    4️⃣ 𝐑𝐞𝐚𝐥-𝐰𝐨𝐫𝐥𝐝 𝐯𝐚𝐥𝐢𝐝𝐚𝐭𝐢𝐨𝐧 through the TRUEDATA project funded by INCIBE - Instituto Nacional de Ciberseguridad. We applied the full model to a large industrial OT cybersecurity project (water infrastructure) with Neoradix Solutions, AirTrace, Bersey, and UCAM Universidad Católica San Antonio de Murcia at the pilots, with the support of the Confederación Hidrográfica del Segura, O.A., Mancomunidad De Los Canales De Taibilla, and FRANCISCO ARAGÓN.

    5️⃣ 𝐂𝐥𝐞𝐚𝐫 𝐨𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐠𝐮𝐢𝐝𝐚𝐧𝐜𝐞. The paper provides best practices for SBOM automation, PSIRT & CVD setup, secure-by-design, OTA updates, monitoring, attestation, documentation, and conformity assessment.

    Our aim from Libelium with this paper is to give the industry a practical, structured, and evidence-based way to operationalize compliance and strengthen cybersecurity by design.
𝐓𝐑𝐔𝐄𝐃𝐀𝐓𝐀 𝐝𝐞𝐦𝐨𝐧𝐬𝐭𝐫𝐚𝐭𝐞𝐬 𝐡𝐨𝐰 𝐭𝐡𝐞 𝐦𝐞𝐭𝐡𝐨𝐝𝐨𝐥𝐨𝐠𝐲 𝐚𝐩𝐩𝐥𝐢𝐞𝐬 𝐭𝐨 𝐡𝐢𝐠𝐡-𝐬𝐭𝐚𝐤𝐞𝐬 𝐢𝐧𝐝𝐮𝐬𝐭𝐫𝐢𝐚𝐥 𝐬𝐲𝐬𝐭𝐞𝐦𝐬. 𝐓𝐡𝐞 𝐂𝐑𝐀 𝐢𝐬 𝐧𝐨𝐭 “𝐣𝐮𝐬𝐭 𝐚𝐧𝐨𝐭𝐡𝐞𝐫 𝐫𝐞𝐠𝐮𝐥𝐚𝐭𝐢𝐨𝐧”, 𝐢𝐭 𝐢𝐬 𝐭𝐡𝐞 𝐧𝐞𝐰 𝐛𝐚𝐬𝐞𝐥𝐢𝐧𝐞 𝐟𝐨𝐫 𝐈𝐨𝐓 𝐭𝐫𝐮𝐬𝐭 𝐢𝐧 𝐄𝐮𝐫𝐨𝐩𝐞. 👉 Download here: https://lnkd.in/dQu54qE2 European Union Agency for Cybersecurity (ENISA) Felix A. Barrio (PhD, CISM) Global Cybersecurity Forum SITE سايت Betania Allo Axon Partners Group ISACA ISACA VALENCIA
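The AHP weighting in takeaway 3 can be sketched with the common row geometric-mean approximation of the priority vector. The domains and pairwise judgments below are hypothetical, not the paper's actual CRA-domain comparisons:

```python
import math

# Hypothetical Saaty-scale pairwise judgments: pairwise[i][j] says how
# much more important domain i is than domain j. Reciprocal by design.
domains = ["secure design", "vulnerability handling", "documentation"]
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric-mean method."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

weights = ahp_weights(pairwise)
# weights sum to 1 and preserve the judged ordering of the domains
```

The resulting weights are what make the readiness metric defensible: each CRA domain's contribution to the final score traces back to explicit, consistency-checked judgments rather than ad-hoc percentages.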

  • Rock Lambros (Influencer)

    Securing Agentic AI @ Zenity | Cybersecurity | CxO, Startup, PE & VC Advisor | Executive & Board Member | CISO | CAIO | QTE | AIGP | Author | OWASP AI Exchange, GenAI & Agentic AI | Tiki Tribe Founding Member

    20,485 followers

    You can’t hack your way to trust. And you can’t innovate in chaos.

    This post is a follow-up to yesterday's article, because organizations must understand that you can't talk about one of the nodes in the triad without talking about the other two. Push one too hard, and the whole system grinds to a halt. But when they’re aligned? That’s when the magic really happens.

    𝗔𝗜 𝗳𝘂𝗲𝗹𝘀 𝘀𝗺𝗮𝗿𝘁𝗲𝗿 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀—𝗯𝘂𝘁 𝗶𝘁’𝘀 𝗼𝗻𝗹𝘆 𝗮𝘀 𝗴𝗼𝗼𝗱 𝗮𝘀 𝘁𝗵𝗲 𝗱𝗮𝘁𝗮 𝗶𝘁’𝘀 𝗳𝗲𝗱.
    AI thrives on clean, accessible data, but if your cybersecurity and data governance aren’t airtight, you’re feeding your AI poisoned inputs, or worse, leaking critical outputs. Data poisoning or model inference attacks FTW.

    𝗖𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗶𝘀𝗻’𝘁 𝗮 𝗯𝗮𝗿𝗿𝗶𝗲𝗿—𝗶𝘁’𝘀 𝗮𝗻 𝗲𝗻𝗮𝗯𝗹𝗲𝗿.
    Too many people treat cybersecurity as the brakes on innovation. But think of it as the seatbelt on your AI-powered sports car. You wouldn’t drive at 200 mph without protection, right? Strong security frameworks aren’t just about protecting data; they’re about enabling trust, the foundation of any digital business.

    𝗕𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗲𝗻𝗮𝗯𝗹𝗲𝗺𝗲𝗻𝘁 𝗶𝘀 𝘁𝗵𝗲 𝗴𝗹𝘂𝗲.
    All the AI innovation and cybersecurity in the world means nothing if it doesn’t deliver measurable business results. Enablement is where the rubber meets the road: turning insights into outcomes, trust into transactions, and resilience into revenue.

    The challenge? These gears don’t always mesh smoothly. 𝗛𝗲𝗿𝗲’𝘀 𝗵𝗼𝘄 𝘁𝗼 𝗴𝗲𝘁 𝘁𝗵𝗲𝗺 𝘀𝗽𝗶𝗻𝗻𝗶𝗻𝗴 𝗶𝗻 𝘀𝘆𝗻𝗰:
    1. Start with strategy: Define clear business outcomes and reverse-engineer the role of AI and cybersecurity.
    2. Break the silos: Your AI and cybersecurity teams can’t operate in isolation. Collaboration isn’t optional; it’s essential.
    3. Measure what matters: Align your KPIs across these three domains. You can’t manage what you don’t measure.

    When done right, this alignment creates a feedback loop: AI insights strengthen business enablement, cybersecurity safeguards them, and the results fuel more innovation. That’s the flywheel.
Are your AI, cybersecurity, and business enablement efforts stuck in silos—or are they part of a single, unified strategy? Let’s discuss. #AIstrategy #Cybersecurity #BusinessEnablement #DigitalTransformation

  • Dunith Danushka

    Technical Product Marketing at EDB | Author of “Practical Data Engineering with Apache Projects”

    6,762 followers

    🚀 Delta Sharing: The Open Protocol for Secure Data Exchange

    Traditionally, data sharing involved providing static CSV/Parquet file dumps based on ad-hoc requests, requiring data engineers to create extracts or build complex ETL pipelines. By the time data reached recipients, it was often outdated. Additionally, moving data across organizational boundaries increased security risks and required manual auditing. Delta Sharing, an open protocol, solves these challenges by enabling direct, real-time data exchange while ensuring security and governance.

    🔍 What is Delta Sharing?
    Delta Sharing is an open-source protocol that allows data providers to securely share live data from their data lake or lakehouse with any recipient, regardless of the computing platform they use. It is designed to work with Delta Lake, but it also supports other formats like Apache Parquet.

    🔧 What Problems Does Delta Sharing Solve?
    ✅ Eliminates Data Copies – Consumers can query shared data without duplicating or exporting it into another system.
    ✅ Interoperability – Enables cross-platform sharing across different cloud and analytics services, including Databricks, Apache Spark, Pandas, and others.
    ✅ Real-time & Secure Access – Uses fine-grained access control to ensure only authorized users can access the latest version of shared data.
    ✅ Simplified Data Collaboration – Reduces the need for custom APIs, FTP transfers, or complex ETL workflows when sharing data with external partners.

    🛠 Key Components in a Delta Sharing Scenario
    - Provider (Data Owner) – The entity sharing the data.
    - Delta Sharing Server – Handles authentication and access control.
    - Recipient (Data Consumer) – The entity accessing the shared data, which can be a data warehouse, a machine learning model, or a BI tool.
    - Storage Backend – Typically an object store (AWS S3, Azure Blob, Google Cloud Storage, MinIO) where the data resides.

    📌 Common Use Cases for Delta Sharing
    💡 Inter-company Data Exchange – Share supply chain, financial, or operational data with partners securely.
    📊 Federated Analytics – Analysts can query live shared datasets without moving them into their own data warehouse.
    🤖 Machine Learning & AI – Data scientists can directly access fresh, live data for model training without worrying about outdated extracts.
    ⚡ Data Monetization – Organizations can offer secure access to valuable datasets as a service without needing data pipelines.

    Delta Sharing + Unity Catalog
    Delta Sharing and Unity Catalog work together to enable secure, scalable, and governed data sharing across organizations. While Delta Sharing provides the protocol for sharing live data with external consumers, Unity Catalog acts as the central governance layer, ensuring fine-grained access control, auditing, and security compliance. I will write about this integration in the future.

    #deltasharing #datagovernance #datasharing
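On the recipient side, the protocol revolves around a small JSON profile file and a `<profile>#<share>.<schema>.<table>` table locator, which clients such as the `delta-sharing` Python package consume. A minimal sketch, where the endpoint, token, and the share/schema/table names are hypothetical and the `table_url` helper is mine:

```python
import json

# Hypothetical recipient profile; the three fields follow the open
# protocol's documented profile format.
profile = {
    "shareCredentialsVersion": 1,
    "endpoint": "https://sharing.example.com/delta-sharing/",
    "bearerToken": "<token>",
}

def table_url(profile_path: str, share: str, schema: str, table: str) -> str:
    """Build the '<profile>#<share>.<schema>.<table>' locator used by
    delta-sharing clients."""
    return f"{profile_path}#{share}.{schema}.{table}"

with open("config.share", "w") as f:
    json.dump(profile, f)

url = table_url("config.share", "supply_chain", "logistics", "shipments")
# With the delta-sharing package installed, a recipient could then call
# e.g. delta_sharing.load_as_pandas(url) to query the live table.
```

Note that the recipient never receives credentials to the provider's object store: the sharing server authenticates the bearer token and brokers access to the underlying Parquet/Delta files.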

  • Pan Wu (Influencer)

    Senior Data Science Manager at Meta

    51,176 followers

    Personal data is highly sensitive information we entrust to internet companies, and strong regulations require these companies to handle it safely and reliably to meet security, privacy, and compliance standards.

    In this tech blog, Airbnb’s data science team shares how they built a data classification workflow to establish a unified strategy for identifying and classifying data across all data stores. The workflow is built on three pillars: Catalog, Detection, and Reconciliation. The Catalog pillar focuses on creating a dynamic and accurate system to identify where data resides and organize it into a comprehensive inventory. Detection addresses the question: what data might be considered personal? This step involves a detection engine structured as a pipeline to scan, validate, and control thresholds for surfacing detected results. Finally, Reconciliation ensures accurate classification by involving data owners in a human-in-the-loop process to confirm or refine detected classifications.

    Given the complexity of the system, the team developed metrics to assess its quality. These metrics—recall, precision, and speed—evaluate how effectively, accurately, and efficiently the classification system operates, ensuring it safeguards personal data over the long term.

    Additionally, the team shares strategies for governing data classification early in the process, along with best practices for improving workflows. These insights provide a clear understanding of not only the metrics but also actionable ways to enhance classification systems. Highly recommended reading for anyone interested in data governance and security.
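The Detection pillar (scan, validate, threshold) can be sketched as a small detector pipeline over column samples; the detectors, threshold, and column data below are illustrative assumptions, not Airbnb's actual engine:

```python
import re

# Illustrative detectors; a real engine would combine many signals
# (column names, value patterns, ML classifiers) per data element type.
DETECTORS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def detect_personal_data(columns, threshold=0.5):
    """Return {column: label} for columns whose sample match rate
    exceeds the threshold; these go to owners for reconciliation."""
    findings = {}
    for col, samples in columns.items():
        for label, pattern in DETECTORS.items():
            hits = sum(1 for value in samples if pattern.search(value))
            if samples and hits / len(samples) >= threshold:
                findings[col] = label  # queued for human-in-the-loop review
    return findings

cols = {
    "contact": ["a@x.com", "b@y.org", "n/a"],
    "notes": ["shipped", "delayed"],
}
print(detect_personal_data(cols))  # -> {'contact': 'email'}
```

The threshold is what controls the recall/precision trade-off the post's metrics measure: lowering it surfaces more candidate columns for owners to confirm, at the cost of more false positives to reconcile.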
#datascience #personal #data #governance #classification #metrics – – –  Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:    -- Spotify: https://lnkd.in/gKgaMvbh   -- Apple Podcast: https://lnkd.in/gj6aPBBY    -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/gqxuQ29E

  • Hemang Doshi

    Next100 CIO Awardee, IT Leadership, Building Resilient Global Infrastructures, Cyber Security, Audit Compliance, Cloud, Digital Transformation, Technology AI Evangelist, Strategic Planning, P&L Owner

    9,260 followers

    Why Identity and Access Management Is Critical for Modern Enterprises

    Identity and Access Management (IAM) is a vital part of any robust security architecture, especially as traditional perimeters dissolve in today’s distributed environments. For technical leaders and practitioners, effective IAM isn’t just about authentication. It’s about implementing continuous, granular controls that adapt to organizational change and emerging risk. Key pillars include:

    User Access Reconciliation: Regular alignment of granted permissions with actual entitlements in critical systems is non-negotiable. Automated and periodic reconciliation detects orphaned accounts and excessive privileges, reducing attack surfaces.

    Privileged Access Management (PAM): High-risk accounts with broad capabilities must be tightly governed. PAM enforces strict controls such as just-in-time elevation, session monitoring, and audit trails to protect sensitive assets from exploitation.

    Timely Access Revocation: When users change roles or exit, immediate deprovisioning is crucial. Delays can leave dormant accounts vulnerable to misuse or compromise. Automated workflows ensure access rights are always in sync with current employment status and responsibilities.

    Principle of Least Privilege: Users should have the minimal access needed to perform their functions, nothing more. This foundational control limits exposure and contains lateral movement in case of breaches.

    Periodic Role Transition Audits: Role transitions are inevitable. Regular reviews of access entitlements ensure that evolving responsibilities are matched by appropriate authorizations, preventing privilege creep and segregation-of-duty violations.

    In a zero-trust era, identity is the new perimeter. Mature IAM programs employ multifactor authentication, continuous role audits, and real-time response to changes, providing both agility and security at enterprise scale.

    #IAM #CyberSecurity #IdentityManagement #PAM #ZeroTrust
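The access-reconciliation and least-privilege pillars can be sketched together: compare granted entitlements against an HR roster and a per-role baseline, flagging orphaned accounts and privilege creep. The roster, roles, and entitlements below are hypothetical:

```python
# Hypothetical least-privilege baseline: role -> permitted entitlements.
ROLE_BASELINE = {
    "analyst": {"read_reports"},
    "admin": {"read_reports", "manage_users"},
}

def reconcile(hr_roster, entitlements):
    """Return (orphaned accounts, {user: privileges beyond their role})."""
    orphaned = set(entitlements) - set(hr_roster)
    excess = {}
    for user, granted in entitlements.items():
        role = hr_roster.get(user)
        if role is None:
            continue  # already flagged as orphaned
        extra = granted - ROLE_BASELINE[role]
        if extra:
            excess[user] = extra  # privilege creep to review/revoke
    return orphaned, excess

roster = {"alice": "analyst", "bob": "admin"}
grants = {
    "alice": {"read_reports", "manage_users"},
    "bob": {"read_reports"},
    "eve": {"read_reports"},
}
orphans, creep = reconcile(roster, grants)
# orphans -> {'eve'}; creep -> {'alice': {'manage_users'}}
```

Run on a schedule and wired to automated revocation, a check like this is what keeps entitlements in sync with current employment status and role, rather than relying on manual quarterly reviews.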
