𝗗𝗮𝘁𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗶𝘀 𝗼𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗺𝗼𝘀𝘁 𝗺𝗶𝘀𝘂𝗻𝗱𝗲𝗿𝘀𝘁𝗼𝗼𝗱 𝘁𝗼𝗽𝗶𝗰𝘀 𝗶𝗻 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲. That's because most people explain it from the inside out: policies, councils, standards, stewardship. But the business does not buy any of that. The business buys outcomes:
→ trustworthy KPIs
→ vendor and partner data you can actually use
→ faster financial close
→ fewer reporting escalations
→ smoother M&A integration
→ AI you can deploy without creating risk debt

Most AI programs fail for boring reasons: nobody owns the data, quality is unknown, access is messy, accountability is missing.

𝗦𝗼 𝗹𝗲𝘁’𝘀 𝘀𝗶𝗺𝗽𝗹𝗶𝗳𝘆 𝗶𝘁. 𝗗𝗮𝘁𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗶𝘀 𝗳𝗼𝘂𝗿 𝘁𝗵𝗶𝗻𝗴𝘀:
→ ownership
→ quality
→ access
→ accountability

𝗔𝗻𝗱 𝗶𝘁 𝗯𝗲𝗰𝗼𝗺𝗲𝘀 𝘃𝗲𝗿𝘆 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝘄𝗵𝗲𝗻 𝘆𝗼𝘂 𝘁𝗵𝗶𝗻𝗸 𝗶𝗻 𝟰 𝗹𝗮𝘆𝗲𝗿𝘀:

1. Data Products (what the business consumes)
→ a named dataset with an owner and SLA
→ clear definitions + metric logic
→ documented inputs/outputs and intended use
→ discoverable in a catalog
→ versioned so changes don’t break reporting

2. Data Management (how products stay reliable)
→ quality rules + monitoring (freshness, completeness, accuracy)
→ lineage (where it came from, where it’s used)
→ master/reference data alignment
→ metadata management (business + technical)
→ access controls and retention rules

3. Data Governance (who decides, who is accountable)
→ data ownership model (domain owners, stewards)
→ decision rights: who can change KPI definitions, thresholds, and sources
→ issue management: triage, escalation paths, resolution SLAs
→ policy enforcement: what’s mandatory vs optional
→ risk and compliance alignment (auditability, approvals)

4. Data Operating Model (how you scale across the enterprise)
→ domain-based setup (data mesh or not, but clear domains)
→ operating cadence: weekly issue review, monthly KPI governance, quarterly standards
→ stewardship at scale (roles, capacity, incentives)
→ cross-domain decision-making for shared metrics
→ enablement: templates, playbooks, tooling support

If you want to start fast:
Pick the 10 metrics that run the business.
Assign an owner.
Define decision rights + escalation.
Then build the data products around them.

↓
𝗜𝗳 𝘆𝗼𝘂 𝘄𝗮𝗻𝘁 𝘁𝗼 𝘀𝘁𝗮𝘆 𝗮𝗵𝗲𝗮𝗱 𝗮𝘀 𝗔𝗜 𝗿𝗲𝘀𝗵𝗮𝗽𝗲𝘀 𝘄𝗼𝗿𝗸 𝗮𝗻𝗱 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀, 𝘆𝗼𝘂 𝘄𝗶𝗹𝗹 𝗴𝗲𝘁 𝗮 𝗹𝗼𝘁 𝗼𝗳 𝘃𝗮𝗹𝘂𝗲 𝗳𝗿𝗼𝗺 𝗺𝘆 𝗳𝗿𝗲𝗲 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿: https://lnkd.in/dbf74Y9E
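The layer-1 "data product" contract can be made concrete as a small descriptor. A minimal sketch in Python; the field names and values here are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """Minimal data product contract: named, owned, versioned, with an SLA."""
    name: str
    owner: str                 # accountable domain owner
    sla_hours: int             # maximum data freshness, in hours
    version: str               # bumped on breaking definition changes
    metric_definitions: dict = field(default_factory=dict)  # metric logic, in plain language

# Hypothetical example: a finance-owned revenue metric product
revenue = DataProduct(
    name="monthly_recurring_revenue",
    owner="finance-domain",
    sla_hours=24,
    version="2.1.0",
    metric_definitions={"mrr": "sum of active subscription fees, normalized to monthly"},
)
print(revenue.owner, revenue.version)
```

Even this much forces the questions that matter: who owns it, how fresh it must be, and what the metric actually means.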
-
🔎 𝗟𝗼𝗼𝗸𝗶𝗻𝗴 𝗶𝗻𝘀𝗶𝗱𝗲 𝗮𝗻 𝗮𝗰𝘁𝘂𝗮𝗹 AMD 𝗰𝗵𝗶𝗽! 😲 Here's a bit of a Ryzen processor made on TSMC's 7-nanometer node. You can see the web of interconnects, the metal wires that connect the transistors (that bottom layer) on a chip to harness their computing power. The image was taken with a new 𝗽𝘁𝘆𝗰𝗵𝗼𝗴𝗿𝗮𝗽𝗵𝗶𝗰 𝗫-𝗿𝗮𝘆 𝗹𝗮𝗺𝗶𝗻𝗼𝗴𝗿𝗮𝗽𝗵𝘆 (𝗣𝘆𝗫𝗟) technique from the Paul Scherrer Institute (PSI), the University of Southern California, and ETH Zürich. The technique currently has 4-nanometer resolution, and the scientists have a path to 1 nm. The cool thing about this technology is its non-destructive imaging power to help find defects in chips. Today’s chips are so complicated that electrical tests alone can no longer pinpoint where a defect is: chipmakers use a mix of optical imaging and other methods to zero in on potential problem areas. They then image such areas with a slow but very high-resolution scanning electron microscope. Finally, they might take a slice of a chip for further imaging with a transmission electron microscope (TEM). When they find the flaw, they can then go back and correct their design. But with PyXL, they have another tool to pinpoint defects without destroying the chip. ✨
-
In companies where productivity has increased by 50%, creativity has doubled, and employee satisfaction is at an all-time high, one surprising change stands out: ditching the outdated obsession with time tracking.

Too many managers are stuck in an outdated paradigm, fixating on:
• When employees clock in
• How long they sit at their desks
• Micromanaging daily schedules

But we’ve hired smart, capable professionals. Treating them like children who need constant supervision is not just demeaning – it's counterproductive. However, it's crucial to maintain a balance. While micromanagement is detrimental, companies still need to ensure discipline and focus on key priorities. The goal is to empower employees while aligning their efforts with organizational objectives.

That’s why you need results-focused management:
1. Shift your metrics: Focus on project milestones, work quality, and client satisfaction instead of hours logged.
2. Embrace flexibility: Allow flexible hours and remote work when possible. Trust employees to manage their time effectively.
3. Cultivate a culture of trust: Communicate openly about priorities and challenges. Reward results, not face time. Promote work-life balance and well-being.

Companies like Netflix, Basecamp, and Atlassian have implemented results-only work environments (ROWE) with remarkable success. They report higher employee engagement, better outcomes, and a more dynamic, innovative workplace culture.

What's one positive outcome you've experienced (as a manager or employee) when given more autonomy at work?

#Leadership #EmployeeEmpowerment #WorkplaceCulture
-
💎 Accessibility For Designers Checklist (PDF: https://lnkd.in/e9Z2G2kF), a practical set of cards on WCAG accessibility guidelines — from accessible color, typography, animations, media, and layout to development — to kick off accessibility conversations early on. Kindly put together by Geri Reid.

WCAG for Designers Checklist, by Geri Reid
Article: https://lnkd.in/ef8-Yy9E
PDF: https://lnkd.in/e9Z2G2kF
WCAG 2.2 Guidelines: https://lnkd.in/eYmzrNh7

Accessibility isn’t about compliance. It’s not about ticking off checkboxes. And it’s not about plugging in accessibility overlays or AI engines either. It’s about *designing* with a wide range of people in mind — from the very start, independent of their skills and preferences. In my experience, the most impactful way to embed accessibility in your work is to bring a handful of people with different needs early into the design process and usability testing. It’s making these test sessions accessible to the entire team, and showing the real impact of design and code on real people using a real product. Teams usually don’t get time to work on features that don’t have a clear business case. But no manager really wants to be seen publicly ignoring their prospective customers. Visualize accessibility to everyone on the team and make an argument about potential reach and potential income. Don’t ask for big commitments: embed accessibility in your work by default. Account for accessibility needs in your estimates. Create accessibility tickets and flag accessibility issues. Don’t mistake smiling and nodding for support — establish timelines, roles, specifics, objectives. And most importantly: measure the impact of your work by repeatedly conducting accessibility testing with real people. Build a strong before/after case to show the change that the team has enabled and contributed to, and celebrate small and big accessibility wins. It might not sound like much, but it can start changing the culture faster than you think.
Useful resources:
Giving A Damn About Accessibility, by Sheri Byrne-Haber (disabled): https://lnkd.in/eCeFutuJ
Accessibility For Designers: Where Do I Start?, by Stéphanie Walter: https://lnkd.in/ecG5qASY
Web Accessibility In Plain Language (Free Book), by Charlie Triplett: https://lnkd.in/e2AMAwyt
Building Accessibility Research Practices, by Maya Alvarado: https://lnkd.in/eq_3zSPJ
How To Build A Strong Case For Accessibility:
↳ https://lnkd.in/ehGivAdY, by 🦞 Todd Libby
↳ https://lnkd.in/eC4jehMX, by Yichan Wang

#ux #accessibility
-
Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

"Here’s code intended for task X:
[previously generated code]
Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.
And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement. Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses. Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about reflection, I recommend: - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023) - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023) - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024) [Original text: https://lnkd.in/g4bTuWtU ]
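The criticism/rewrite loop described above can be sketched in a few lines of Python. The `llm` function here is a canned stand-in added for illustration (it returns fixed strings keyed off the prompt); in a real application it would wrap an actual model API call:

```python
def llm(prompt: str) -> str:
    """Stand-in for a real model call. Returns canned text for demonstration."""
    if "constructive criticism" in prompt:
        return "Consider handling the empty-list case and adding a docstring."
    if "use the feedback" in prompt:
        return ('def mean(xs):\n'
                '    """Average of xs; 0.0 if empty."""\n'
                '    return sum(xs) / len(xs) if xs else 0.0')
    return "def mean(xs):\n    return sum(xs) / len(xs)"

def reflect(task: str, rounds: int = 1) -> str:
    # 1. Initial generation
    output = llm(f"Write code for this task: {task}")
    for _ in range(rounds):
        # 2. Ask the model to critique its own output
        critique = llm(
            f"Here is code intended for this task: {task}\n{output}\n"
            "Check the code carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        # 3. Ask it to revise using its own critique
        output = llm(
            f"Task: {task}\nCode:\n{output}\nFeedback:\n{critique}\n"
            "Now use the feedback to rewrite the code."
        )
    return output

print(reflect("compute the mean of a list"))
```

The loop structure is the whole pattern: generate, critique, revise, optionally repeat. Swapping the stub for a real client and plugging unit-test results into the critique prompt gives the tool-assisted variant described above.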
-
It’s simple math 🧐

I used to think that motivation was the key to monumental success. Long story short, it’s not. It’s about the little things you do every day that will take you from reasonable to slightly unreasonable to completely unreasonable progress. Your future is not defined by how motivated you are, but by your daily routines and systems. I believe in this so much that we named our company Butterfly 3ffect to reflect the value of incremental gains. We believe that’s how the best people and brands grow.

Here’s how you grow the small way:
1. Start by setting achievable goals, like reading one chapter of a book each day or going for a short walk
2. Practice gratitude by writing down three things you're thankful for every night before bed
3. Engage in daily self-reflection, even if it's just for a few minutes, to assess your thoughts and actions
4. Incorporate small acts of kindness into your daily routine, like holding the door for someone or offering a genuine compliment
5. Learn something new every day, whether it's a fun fact, a new word, or a new skill
6. Prioritise self-care by getting enough sleep, staying hydrated, and taking breaks when needed
7. Surround yourself with positive influences, whether it's uplifting books, supportive friends, or inspiring podcasts
8. Embrace failure as a learning opportunity and a stepping stone to growth
9. Stay consistent and patient, knowing that small progress over time adds up to significant improvement
10. Celebrate your achievements, no matter how small, to stay motivated and encouraged along the way.
-
The silent productivity killer you've never heard of... Attention Residue (and 3 strategies to fight back):

The concept of "attention residue" was first identified by University of Washington business professor Dr. Sophie Leroy in 2009. The idea is quite simple: There is a cognitive cost to shifting your attention from one task to another. When our attention is shifted, there is a "residue" that remains in the brain and impairs our cognitive performance on the new task. Put differently, you may think your attention has fully shifted to the next task, but your brain has a lag—it thinks otherwise!

It's relatively easy to find examples of this effect in your own life:
• You get on a call but are still thinking about the prior call.
• An email pops up during a meeting and derails your focus.
• You check your phone during a lecture and can't refocus afterwards.

There are two key points worth noting here:
1. The research indicates it doesn't seem to matter whether the task switch is "macro" (i.e. moving from one major task to the next) or "micro" (i.e. pausing one major task for a quick check on some minor task).
2. The challenge is even more pronounced in a remote/hybrid world, where we're free to roam the internet, have our chat apps open, and check our phones all while appearing to be focused in a Zoom meeting.

With apologies to any self-proclaimed proficient multitaskers, the research is very clear: Every single time you call upon your brain to move away from one task and toward another, you are hurting its performance—your work quality and efficiency suffer. Author Cal Newport puts it well: "If, like most, you rarely go more than 10–15 minutes without a just check, you have effectively put yourself in a persistent state of self-imposed cognitive handicap."

Here are three strategies to manage attention residue and fight back:

1. Focus Work Blocks: Block time on your calendar for sprints of focused energy. Set a timer for a 45-90 minute window, close everything except the task at hand, and focus on one thing. It works wonders.

2. Take a Breather: Whenever possible, create open windows of 5-15 minutes between higher-value tasks. Schedule 25-minute calls. Block those windows on your calendar. During them, take a walk or close your eyes and breathe.

3. Batch Processing: You still have to reply to messages and emails. Pick a few windows during the day when you will deeply focus on the task of processing and replying to these. Your response quality will go up from this batching, and they won't bleed into the rest of your day.

Attention residue is a silent killer of your work quality and efficiency. Understanding it—and taking the steps to fight back—will have an immediate positive impact on your work and life.

If you enjoyed this or learned something, share it with others and follow me Sahil Bloom for more in the future! The beautiful visualization is by Roberto Ferraro.
-
MCP is an MVP if you are exploring ways to supercharge your AI workflows. I am very impressed by the MCP (Model Context Protocol) architecture and proud of the way we have embraced it at Visa to accelerate our GAI work. MCP is an open standard introduced by Anthropic. It acts like a universal connector, seamlessly linking AI applications to external tools, data, and services. Think of it as Bluetooth for AI – enabling plug-and-play integrations without multiple messy connections and custom code. For companies embracing the power of GAI, MCP is a dream come true. It eliminates the headache of building bespoke API integrations for every tool, letting AI agents access resources like file systems, wikis, shared drives, and databases in real time. This means your AI can pull custom data, automate tasks, or analyze reports instantly. As an early adopter, we are already using MCP to streamline workflows, and with thousands of community-built MCP servers, the ecosystem is exploding. My advice to those beginning their MCP journey – start small. Identify a repetitive task (like updating CRM records or generating analysis). Set up an MCP server for your tool or service (many are prebuilt), connect it to your AI client, and watch the magic happen. Experiment, scale, and explore the open-source MCP community for inspiration. Once you start using MCP, you will see a step-function increase in your innovation velocity.
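The plug-and-play idea can be sketched in plain Python. To be clear, this is not the MCP SDK or the protocol itself, just a toy illustration of the pattern an MCP server enables: tools registered once, then discoverable and callable by any client through one uniform interface instead of bespoke integrations:

```python
from typing import Any, Callable, Dict

class ToolServer:
    """Toy stand-in for an MCP-style server: tools register once and are
    exposed through a uniform list/call interface."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def tool(self, fn: Callable[..., Any]) -> Callable[..., Any]:
        # Decorator: register a function as a callable tool by name.
        self._tools[fn.__name__] = fn
        return fn

    def list_tools(self) -> list:
        # Discovery: a client can ask what tools exist.
        return sorted(self._tools)

    def call(self, name: str, **kwargs: Any) -> Any:
        # Invocation: a client calls any tool through one entry point.
        return self._tools[name](**kwargs)

server = ToolServer()

@server.tool
def update_crm_record(record_id: str, status: str) -> dict:
    # In a real server this would hit the CRM's API; here it just echoes.
    return {"id": record_id, "status": status}

print(server.list_tools())
print(server.call("update_crm_record", record_id="42", status="closed"))
```

A real MCP server adds typed schemas, transport, and authentication on top, but the register-discover-call shape is the core of why integrations stop being bespoke.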
-
Innovation isn’t just about upgrading your tools—it’s about reinventing how you create, deliver, and capture value. Digital business models are reshaping industries by creating value in ways unimaginable a decade ago. These aren't your grandparents’ business models with a digital veneer—they're transformative, leveraging tech to disrupt markets, engage customers, and redefine competition.

This revolution is captured brilliantly in the book: 𝐷𝑖𝑔𝑖𝑡𝑎𝑙 𝐵𝑢𝑠𝑖𝑛𝑒𝑠𝑠 𝑀𝑜𝑑𝑒𝑙𝑠 𝑓𝑜𝑟 𝐼𝑛𝑑𝑢𝑠𝑡𝑟𝑦 4.0: 𝐻𝑜𝑤 𝐼𝑛𝑛𝑜𝑣𝑎𝑡𝑖𝑜𝑛 𝑎𝑛𝑑 𝑇𝑒𝑐ℎ𝑛𝑜𝑙𝑜𝑔𝑦 𝑆ℎ𝑎𝑝𝑒 𝑡ℎ𝑒 𝐹𝑢𝑡𝑢𝑟𝑒 𝑜𝑓 𝐶𝑜𝑚𝑝𝑎𝑛𝑖𝑒𝑠.

𝐅𝐨𝐮𝐫 𝐏𝐢𝐥𝐥𝐚𝐫𝐬 𝐨𝐟 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐁𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐌𝐨𝐝𝐞𝐥𝐬:
• 𝐃𝐢𝐠𝐢𝐭𝐚𝐥𝐥𝐲 𝐄𝐧𝐚𝐛𝐥𝐞𝐝 𝐕𝐚𝐥𝐮𝐞 𝐂𝐫𝐞𝐚𝐭𝐢𝐨𝐧: Value driven by tech, not just supported by it. Think smart thermostats optimizing energy, not just controlling it.
• 𝐌𝐚𝐫𝐤𝐞𝐭 𝐍𝐨𝐯𝐞𝐥𝐭𝐲: New offerings or ways of doing business—like predictive maintenance or on-demand manufacturing.
• 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐂𝐮𝐬𝐭𝐨𝐦𝐞𝐫 𝐓𝐨𝐮𝐜𝐡𝐩𝐨𝐢𝐧𝐭𝐬: Customer relationships built through apps, IoT, and connected services.
• 𝐃𝐢𝐠𝐢𝐭𝐚𝐥𝐥𝐲 𝐃𝐞𝐫𝐢𝐯𝐞𝐝 𝐔𝐒𝐏: Unique selling points rooted in data and digital capabilities.

But how do we map the revenue streams emerging from these shifting dynamics? I’ve come to see it through three essential components:
• 𝐂𝐨𝐫𝐞 𝐕𝐚𝐥𝐮𝐞 𝐏𝐫𝐨𝐩𝐨𝐬𝐢𝐭𝐢𝐨𝐧 (What is being offered?)
• 𝐕𝐚𝐥𝐮𝐞 𝐂𝐫𝐞𝐚𝐭𝐢𝐨𝐧 𝐌𝐞𝐜𝐡𝐚𝐧𝐢𝐬𝐦𝐬 (How is value created?)
• 𝐑𝐞𝐯𝐞𝐧𝐮𝐞 𝐒𝐭𝐫𝐞𝐚𝐦𝐬 (How is value captured?)

𝐑𝐞𝐚𝐝 𝐟𝐮𝐥𝐥 𝐚𝐫𝐭𝐢𝐜𝐥𝐞: https://lnkd.in/ewhRUM28

• Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
• Ring the 🔔 for notifications!
-
By now, the "95% failure rate" of GenAI financial returns (ref MIT's Project NANDA) is part of all consulting decks. The report blames the incorrect approach as the primary reason, rather than model maturity, etc. The key is to understand what #ROI metrics are used to determine the financial returns. I asked #Copilot about this, and here's what it told me:

---

Here are three examples of ROI frameworks that enterprises are using to evaluate and scale GenAI adoption effectively:

1. Business Outcome-Based ROI Framework (Gartner)
Summary: Gartner recommends aligning GenAI initiatives with measurable business outcomes such as cost reduction, revenue growth, or productivity gains. For example, a retail company using GenAI for automated product descriptions tracked a 22% increase in conversion rates and a 15% reduction in content creation costs. The framework emphasizes setting baseline metrics before deployment and tracking improvements post-implementation.
🔗 https://lnkd.in/dER7cTeF

2. Time-to-Value and Efficiency Metrics (BCG)
Summary: Boston Consulting Group suggests using time-to-value (TTV) and operational efficiency as key ROI indicators. In one case, a logistics firm used GenAI to optimize routing, reducing delivery times by 18% and fuel costs by 12%. BCG's framework includes pre/post comparisons, automation impact, and employee productivity metrics to quantify GenAI's contribution.
🔗 https://lnkd.in/da2zcSfW

3. Model Performance vs. Business KPIs (McKinsey)
Summary: McKinsey advocates for linking GenAI model performance directly to business KPIs. For instance, a financial services firm used GenAI for customer support automation and tracked resolution time, customer satisfaction scores, and call deflection rates. The framework includes continuous monitoring of model accuracy, relevance, and business impact.
🔗 https://lnkd.in/dA6zEGuS

🔑 Key Message Summary
Effective GenAI ROI frameworks combine technical performance metrics with business impact indicators.
Leading approaches include tracking cost savings, productivity gains, time-to-value, and alignment with strategic KPIs. Enterprises that define success upfront and monitor outcomes continuously are more likely to scale GenAI successfully.

---

The direction taken seems well-intentioned. However, this measure of success is not quite what leads to real, solid business outcomes. Individual productivity improvements are just that: individual. They don't scale across the organization unless they are "vertically scaled" top to bottom through an entire process to deliver bottom-line improvements, and then further "horizontally scaled" end to end across the firm's entire value chain to deliver top-line value. My forthcoming book on Cognitive Chasm provides actionable guidance to practitioners on this.
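The pre/post comparisons these frameworks rely on reduce to simple arithmetic. A minimal sketch; the metric names and figures are hypothetical, not taken from the cited reports:

```python
def roi(baseline_cost: float, post_cost: float,
        revenue_gain: float, genai_spend: float) -> float:
    """Simple pre/post ROI: (cost savings + new revenue - spend) / spend."""
    savings = baseline_cost - post_cost
    return (savings + revenue_gain - genai_spend) / genai_spend

# Hypothetical figures: content creation cost fell from $100k to $85k,
# a conversion lift added $50k in revenue, and the GenAI rollout cost $40k.
print(round(roi(100_000, 85_000, 50_000, 40_000), 3))  # 0.625, i.e. 62.5% return
```

The formula is trivial; the hard part, as the post argues, is that baseline and post figures measured on one individual's output rarely survive aggregation across a whole process or value chain.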