Evaluating Workflows for Efficiency

Explore top LinkedIn content from expert professionals.

  • Vitaly Friedman is an Influencer

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    223,720 followers

    ✅ How To Run Task Analysis In UX (https://lnkd.in/e_s_TG3a), a practical step-by-step guide on how to study user goals, map users’ workflows, understand top tasks and then use them to inform and shape design decisions. Neatly put together by Thomas Stokes.

    🚫 Good UX isn’t just high completion rates for top tasks.
    🤔 Better: high accuracy, low time on task, high completion rates.
    ✅ Task analysis breaks down user tasks to understand user goals.
    ✅ Tasks are goal-oriented user actions (start → end point → success).
    ✅ Usually presented as a tree (hierarchical task-analysis diagram, HTA).
    ✅ First, collect data: users, what they try to do and how they do it.
    ✅ Refine your task list with stakeholders, then get users to vote.
    ✅ Translate each top task into goals, a starting point and an end point.
    ✅ Break down: user’s goal → sub-goals; sub-goal → single steps.
    ✅ For non-linear/circular steps: mark alternate paths as branches.
    ✅ Scrutinize every single step for errors, efficiency, opportunities.
    ✅ Attach design improvements as sticky notes to each step.
    🚫 Don’t lose track in small tasks: come back to the big picture.

    Personally, I've been relying on top task analysis for years now, kindly introduced to me by Gerry McGovern. Of all the techniques for capturing the essence of user experience, it’s among the most reliable. Bring it together with task completion rates and task completion times, and you have a dependable metric to track your UX performance over time.

    Once you identify 10–12 representative tasks and get them approved by stakeholders, you can track how well a product is performing over time. Refine the task wording and recruit the right participants. Then give these tasks to 15–18 actual users and track success rates, time on task and accuracy of input (a simple way to compute these is sketched after this post). That gives you an objective measure of success for your design efforts. And you can repeat it every 4–8 months, depending on the velocity of the team. It’s remarkably easy to establish and run, but it also has high visibility and impact — especially if it tracks the heart of what the product is about.

    Useful resources:
    Task Analysis: Support Users in Achieving Their Goals (attached image), by Maria Rosala https://lnkd.in/ePmARap3
    What Really Matters: Focusing on Top Tasks, by Gerry McGovern https://lnkd.in/eWBXpCQp
    How To Make Sense Of Any Mess (free book), by Abby Covert https://lnkd.in/enxMMhMe
    How We Did It: Task Analysis (Case Study), by Jacob Filipp https://lnkd.in/edKYU6xE
    How To Optimize UX and Improve Task Efficiency, by Ella Webber https://lnkd.in/eKdKNtsR
    How to Conduct a Top Task Analysis, by Jeff Sauro https://lnkd.in/eqWp_RNG
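Editor's note: a minimal sketch of the scoring the post describes (completion rate, time on task, input accuracy per top task). The record schema and values are hypothetical, not from the post.

```python
from statistics import mean

# Hypothetical usability-test records: one row per participant per task.
sessions = [
    {"task": "find-invoice", "completed": True,  "seconds": 48,  "accuracy": 1.0},
    {"task": "find-invoice", "completed": True,  "seconds": 95,  "accuracy": 0.8},
    {"task": "find-invoice", "completed": False, "seconds": 180, "accuracy": 0.4},
]

def task_metrics(records: list[dict], task_id: str) -> dict:
    """Completion rate, mean time on task, and mean accuracy for one top task."""
    rows = [r for r in records if r["task"] == task_id]
    return {
        "completion_rate": sum(r["completed"] for r in rows) / len(rows),
        "mean_time_on_task_s": mean(r["seconds"] for r in rows),
        "mean_accuracy": mean(r["accuracy"] for r in rows),
    }

# Re-running the same script after each study round (every 4-8 months, per the
# post) gives the per-task trend line for UX performance over time.
print(task_metrics(sessions, "find-invoice"))
```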

  • Raj Goodman Anand is an Influencer

    Helping organizations build AI operating systems | Founder, AI-First Mindset®

    23,423 followers

    Too many AI strategies are being built around the technology instead of the business challenges they should solve. The real value of AI comes when it is directly tied to your goals.

    I have arrived at seven lessons on how to align your AI strategy directly with your business goals:

    1. Start with the "why," not the "what." Before discussing models or tools, ask what business problem you need to solve. It could be speeding up product development, or cutting operational costs. Let that answer be your guide.

    2. Think in terms of business outcomes. Measure AI success by its impact on metrics like revenue growth or employee productivity, not by technical accuracy.

    3. Build a cross-functional team. AI can't live solely in the IT department. Include leaders from all relevant departments from day one to ensure the strategy serves the entire business.

    4. Prioritize quick wins to build momentum. Identify a few small, high-impact projects that can deliver results quickly. This builds organizational confidence and makes people ready to take on larger initiatives.

    5. Invest in data foundations. The best AI strategy will fail without clean and well-governed data. A disciplined approach to data quality is non-negotiable.

    6. Focus on change management. Technology is the easy part. Prepare your people for new workflows and equip them with the skills to work alongside AI effectively.

    7. Create a feedback loop. An AI strategy is not a one-time plan. Continuously gather feedback from users and analyze performance data to adapt and refine your approach.

    The goal is to make AI a part of how you achieve your objectives, not a separate project.

    #AIStrategy #BusinessGoals #DigitalTransformation #Leadership #ArtificialIntelligence

  • Nancy Duarte is an Influencer
    221,384 followers

    Most change initiatives don't fail because of the change that's happening; they fail because of how the change is communicated.

    I've watched brilliant restructurings collapse and transformative acquisitions unravel… Not because the plan was flawed, but because leaders were more focused on explaining the "what" and "why" than on addressing the fears and concerns of the people on their team.

    People don't resist change because they don't understand it. They resist because they haven't been given a compelling story about their role in it.

    This is where the Venture Scape framework becomes invaluable. The framework maps your team's journey through five distinct stages of change:

    The Dream - When you envision something better and need to spark belief
    The Leap - When you commit to action and need to build confidence
    The Fight - When you face resistance and need to inspire bravery
    The Climb - When progress feels slow and you need to fuel endurance
    The Arrival - When you achieve success and need to honor the journey

    The key is knowing exactly where your team is in this journey and tailoring your communication accordingly. If you're announcing a merger during the Leap stage, don't deliver a message about endurance. Your team needs a moment of commitment: stories and symbols that anchor them in the decision and clarify the values that remain unchanged.

    You can’t know where your team is on this spectrum without talking to them. Don’t just guess. Have real conversations. Listen to their specific concerns. Then craft messages that speak directly to those fears while calling on their courage.

    Your job isn't just to announce change, but to walk beside your team and help them understand what role they play in the story at each stage.

    #LeadershipCommunication #Illuminate

  • Aishwarya Srinivasan is an Influencer
    618,760 followers

    Most people evaluate LLMs by benchmarks alone. But in production, the real question is: how well do they actually perform?

    When you’re running inference at scale, these are the 3 performance metrics that matter most:

    1️⃣ Latency
    How fast does the model respond after receiving a prompt? There are two kinds to care about:
    → First-token latency: time to start generating a response
    → End-to-end latency: time to generate the full response
    Latency directly impacts UX for chat, speed for agentic workflows, and runtime cost for batch jobs. Even small delays add up fast at scale. (A quick way to measure both kinds is sketched after this post.)

    2️⃣ Context Window
    How much information can the model remember, both from the prompt and prior turns? This affects long-form summarization, RAG, and agent memory. Models range from:
    → GPT-3.5 / LLaMA 2: 4k–8k tokens
    → GPT-4 / Claude 2: 32k–200k tokens
    → GPT-OSS-120B: 131k tokens
    Larger context enables richer workflows but comes with tradeoffs: slower inference and higher compute cost. Use compression techniques like attention sinks or sliding windows to get more out of your context window.

    3️⃣ Throughput
    How many tokens or requests can the model handle per second? This is key when you’re serving thousands of requests or processing large document batches. Higher throughput = faster completion and lower cost.

    How to optimize based on your use case:
    → Real-time chat or tool use → prioritize low latency
    → Long documents or RAG → prioritize a large context window
    → Agentic workflows → find a balance between latency and context
    → Async or high-volume processing → prioritize high throughput

    My 2 cents 🤌
    → Choose in-region, lightweight models for lower latency
    → Use 32k+ context models only when necessary
    → Mix long-context models with fast first-token latency for agents
    → Optimize batch size and decoding strategy to maximize throughput

    Don’t just pick a model based on benchmarks. Pick the right tradeoffs for your workload.

    〰️〰️〰️
    Follow me (Aishwarya Srinivasan) for more AI insights and subscribe to my Substack for more in-depth blogs and weekly updates in AI: https://lnkd.in/dpBNr6Jg
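Editor's note: a minimal sketch of measuring first-token vs. end-to-end latency with a streaming call, assuming the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name is just a placeholder, and any streaming client works the same way.

```python
import time
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def measure_latency(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Stream one completion and time first-token vs. end-to-end latency."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0

    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # first visible token arrived
            chunks += 1

    end = time.perf_counter()
    if first_token_at is None:  # nothing was generated
        return {"error": "empty response"}
    return {
        "first_token_latency_s": round(first_token_at - start, 3),
        "end_to_end_latency_s": round(end - start, 3),
        # rough throughput proxy: streamed chunks per second of generation time
        "chunks_per_sec": round(chunks / max(end - first_token_at, 1e-9), 1),
    }

print(measure_latency("Explain time-on-task in one sentence."))
```

Running this across candidate models with your real prompts gives the latency/throughput numbers the post says to trade off, rather than relying on published benchmarks.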

  • Ridvan Aslan

    Cyber Security Analyst at CYBLU

    3,639 followers

    When I started as a SOC Analyst, I thought the job was all about me, my SIEM, and my alerts. But I quickly realized: even the best detection is useless if no one understands what I’m saying.

    If the IT team doesn’t get my request, they won’t isolate the machine.
    If leadership doesn’t understand the risk, they won’t support action.
    If developers don’t see the threat, they’ll push vulnerable code again.

    Here’s how I started building better communication skills — and how it changed everything:

    1. Translate Technical to Practical
    Instead of: “We detected TTPs consistent with MITRE ATT&CK T1059 via base64-encoded PowerShell.”
    I now say: “We found someone trying to run malicious PowerShell on a user machine. It could lead to ransomware. We blocked it.”
    Simple. Clear. No jargon.

    2. Listen Before You Send
    I used to send long, technical emails, assuming the other team would read and respond. Now, I ask:
    “What does the IT team care about?” (Steps to fix)
    “What does management care about?” (Business risk, cost)
    Tailoring your message is respect.

    3. Speak Their Language
    For IT: use system names, impact, urgency
    For Leadership: talk risk, reputation, compliance
    For DevOps: focus on secure coding and CI/CD integration

    4. Document Your Ask Clearly
    I learned to write tickets or emails like this:
    - What happened
    - What I need from them
    - Deadline or urgency
    - Contact if they have questions
    This clarity saves time — and builds trust.

    Final Thought: you don’t just need to detect threats — you need to communicate them. The more clearly you speak, the faster your organization can act.

    Cybersecurity is a team sport. Communication is your bridge.

    How do you make sure your messages land across teams?

    #CyberSecurity #SOCAnalyst #SoftSkills #CrossTeamCommunication #BlueTeam #InfoSec #IncidentResponse #Leadership #DevSecOps #SOCLife #SecurityAwareness #CyberCareers #SpeakToLead

  • Gayatri Agrawal

    Building AI transformation company @ ALTRD

    34,926 followers

    Six months ago, a client almost pulled the plug on an AI implementation we were running. Three weeks in. Leadership was aligned. The use case was clear. The tools were live. And yet adoption had started to stall. Usage dropped. Teams quietly slipped back into old workflows.

    Moments like this define whether an AI project succeeds or dies.

    At ALTRD, our instinct isn’t to defend the system we built. Our instinct is to investigate the system we missed. So we paused the rollout and audited what was actually happening inside the workflow.

    What we found was instructive. The training had landed well. But the implementation had been designed around how leadership thought the team worked. Not how they actually worked. Two things were quietly breaking adoption.

    First, we had optimized the visible workflow but missed an invisible step. There was a key handoff happening informally between two people over WhatsApp. It wasn’t documented anywhere. It never showed up in process charts. But it was where the real decision-making happened. Our redesigned workflow skipped that moment completely.

    Second, there was a quiet skeptic in the system. The team lead everyone naturally looked to before trying something new had concerns she hadn’t voiced in any meeting. Not because she was resistant, but because she wasn’t convinced the workflow would hold up under real pressure. Once the team sensed that hesitation, adoption slowed down.

    So we fixed the system. We remapped the actual workflow, not the documented one. Then we worked directly with the team lead. Not to sell the tool, but to understand the operational concerns and redesign parts of the system around them.

    The engagement expanded. And that project ended up becoming one of the most valuable learning moments for how we implement AI today.

    Two lessons we now carry into every engagement at ALTRD: document the informal workflow, not just the official one. And find the quiet skeptic in the room early. They’re rarely the blocker. They’re usually the signal that something important hasn’t been designed properly yet.

    AI implementation isn’t just a technical system. It’s a human system. And if you want adoption to stick, you have to understand both.

  • Vin Vashishta is an Influencer

    AI Strategist | Monetizing Data & AI For The Global 2K Since 2012 | 3X Founder | Best-Selling Author

    208,663 followers

    There’s a huge difference between ‘I got AI to do this amazing thing for social media points’ and ‘I got AI to do this thing that generates a lot of revenue for my business or our clients.’ Real-world AI is very different.

    Most agents require small language models. Large context windows and multiple rounds of model calls turn the unit economics of foundational models negative for many use cases. (A rough cost illustration follows this post.)

    Everything we build for clients starts with local AI. We spend no more than 2 days trying to get the workflow running on the Dell Pro Max T2 in my office. If it won’t run locally, using a frontier model rarely changes that.

    We scale the agent to support a small set of early adopters. This phase is critical. An early adopter cohort has been trained to use agents at their earliest maturity phase. Most users would reject the agent in this raw form. But this phase is intended to rapidly improve the agent’s workflow integration, orchestration, and reliability. Human feedback from trained early adopters improves agent performance faster than any other approach I have found.

    We iterate on more than just the LLMs. This phase fills in the knowledge graph, improves tool usage, adds guardrails, and informs the usage of more traditional machine learning models to augment the agent.

    When improvements plateau, we assess the agent. It is only promoted if its impact on outcomes meets user or customer expectations. Is it valuable? How does it reorchestrate workflows? Can the business monetize it?

    We roll the agent out to an alpha release cohort to scale the feedback flywheel. At this point, we know we have something valuable. We’re trying to improve its reliability and handle more workflow variations before a wider launch.

    We only evaluate frontier model usage at this phase. We finally know enough to make targeted decisions about where in the workflow frontier model performance could make a big enough difference to be worth considering.

    The alpha release also reveals adoption barriers for the agent and the reorchestrated workflow. Most agents require us to craft an adoption journey for users and customers. That typically includes training for internal users and a phased rollout for customers.

    When improvement plateaus again, the agent is ready for general release. The process takes 2-3 months, and only about 30% of the workflows we try in my office end up going the distance.

    Data and information architecture make a huge difference. One client with a very mature knowledge graph is seeing a workflow success rate of over 50%. Small models perform significantly better for their use cases.

    #DellProMax
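Editor's note: a back-of-the-envelope sketch of the unit-economics point. All prices, token counts, call counts, and the revenue figure are hypothetical assumptions for illustration, not numbers from the post.

```python
# HYPOTHETICAL numbers throughout: illustrative assumptions, not figures from the post.

def cost_per_task(calls: int, in_tokens: int, out_tokens: int,
                  usd_per_m_in: float, usd_per_m_out: float) -> float:
    """Total model cost for one agent task that makes `calls` rounds of model calls."""
    per_call = in_tokens / 1e6 * usd_per_m_in + out_tokens / 1e6 * usd_per_m_out
    return calls * per_call

# An agent loop often re-sends a large context on every round, so input tokens dominate.
frontier = cost_per_task(calls=12, in_tokens=60_000, out_tokens=1_500,
                         usd_per_m_in=3.00, usd_per_m_out=15.00)
small_local = cost_per_task(calls=12, in_tokens=60_000, out_tokens=1_500,
                            usd_per_m_in=0.10, usd_per_m_out=0.40)

revenue_per_task = 2.00  # assumed value the business captures per completed task
print(f"frontier model: ${frontier:.2f}/task -> margin ${revenue_per_task - frontier:+.2f}")
print(f"small model:    ${small_local:.2f}/task -> margin ${revenue_per_task - small_local:+.2f}")
```

Under these assumed numbers the frontier model loses money per task (about -$0.43) while the small model clears a positive margin, which is the shape of the tradeoff the post describes.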

  • Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    713,396 followers

    Over the last year, I’ve seen many people fall into the same trap: they launch an AI-powered agent (chatbot, assistant, support tool, etc.)… but only track surface-level KPIs — like response time or number of users.

    That’s not enough. To create AI systems that actually deliver value, we need 𝗵𝗼𝗹𝗶𝘀𝘁𝗶𝗰, 𝗵𝘂𝗺𝗮𝗻-𝗰𝗲𝗻𝘁𝗿𝗶𝗰 𝗺𝗲𝘁𝗿𝗶𝗰𝘀 that reflect:
    • User trust
    • Task success
    • Business impact
    • Experience quality

    This infographic highlights 15 𝘦𝘴𝘴𝘦𝘯𝘵𝘪𝘢𝘭 dimensions to consider:

    ↳ 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆 — Are your AI answers actually useful and correct?
    ↳ 𝗧𝗮𝘀𝗸 𝗖𝗼𝗺𝗽𝗹𝗲𝘁𝗶𝗼𝗻 𝗥𝗮𝘁𝗲 — Can the agent complete full workflows, not just answer trivia?
    ↳ 𝗟𝗮𝘁𝗲𝗻𝗰𝘆 — Response speed still matters, especially in production.
    ↳ 𝗨𝘀𝗲𝗿 𝗘𝗻𝗴𝗮𝗴𝗲𝗺𝗲𝗻𝘁 — How often are users returning or interacting meaningfully?
    ↳ 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝗥𝗮𝘁𝗲 — Did the user achieve their goal? This is your north star.
    ↳ 𝗘𝗿𝗿𝗼𝗿 𝗥𝗮𝘁𝗲 — Irrelevant or wrong responses? That’s friction.
    ↳ 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗗𝘂𝗿𝗮𝘁𝗶𝗼𝗻 — Longer isn’t always better — it depends on the goal.
    ↳ 𝗨𝘀𝗲𝗿 𝗥𝗲𝘁𝗲𝗻𝘁𝗶𝗼𝗻 — Are users coming back 𝘢𝘧𝘵𝘦𝘳 the first experience?
    ↳ 𝗖𝗼𝘀𝘁 𝗽𝗲𝗿 𝗜𝗻𝘁𝗲𝗿𝗮𝗰𝘁𝗶𝗼𝗻 — Especially critical at scale. Budget-wise agents win.
    ↳ 𝗖𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻 𝗗𝗲𝗽𝘁𝗵 — Can the agent handle follow-ups and multi-turn dialogue?
    ↳ 𝗨𝘀𝗲𝗿 𝗦𝗮𝘁𝗶𝘀𝗳𝗮𝗰𝘁𝗶𝗼𝗻 𝗦𝗰𝗼𝗿𝗲 — Feedback from actual users is gold.
    ↳ 𝗖𝗼𝗻𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴 — Can your AI 𝘳𝘦𝘮𝘦𝘮𝘣𝘦𝘳 𝘢𝘯𝘥 𝘳𝘦𝘧𝘦𝘳 to earlier inputs?
    ↳ 𝗦𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆 — Can it handle volume 𝘸𝘪𝘵𝘩𝘰𝘶𝘵 degrading performance?
    ↳ 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 𝗘𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆 — This is key for RAG-based agents.
    ↳ 𝗔𝗱𝗮𝗽𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗦𝗰𝗼𝗿𝗲 — Is your AI learning and improving over time?

    (A small example of computing several of these from session logs follows this post.)

    If you're building or managing AI agents — bookmark this. Whether it's a support bot, a GenAI assistant, or a multi-agent system — these are the metrics that will shape real-world success.

    𝗗𝗶𝗱 𝗜 𝗺𝗶𝘀𝘀 𝗮𝗻𝘆 𝗰𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗼𝗻𝗲𝘀 𝘆𝗼𝘂 𝘂𝘀𝗲 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝘀? Let’s make this list even stronger — drop your thoughts 👇
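Editor's note: a minimal sketch showing how a few of the listed metrics (success rate, error rate, latency, cost per interaction, satisfaction, conversation depth) could be rolled up from per-session logs. The log schema and values are hypothetical, not from the post or any specific tool.

```python
from statistics import mean

# Hypothetical interaction log: one record per agent session.
logs = [
    {"goal_met": True,  "turns": 4, "errors": 0, "latency_ms": [420, 380, 510, 460], "cost_usd": 0.012, "csat": 5},
    {"goal_met": False, "turns": 7, "errors": 2, "latency_ms": [450, 900, 620, 480, 530, 700, 610], "cost_usd": 0.031, "csat": 2},
    {"goal_met": True,  "turns": 3, "errors": 0, "latency_ms": [390, 410, 440], "cost_usd": 0.009, "csat": 4},
]

report = {
    "success_rate":           sum(s["goal_met"] for s in logs) / len(logs),
    "error_rate_per_turn":    sum(s["errors"] for s in logs) / sum(s["turns"] for s in logs),
    "avg_latency_ms":         mean(ms for s in logs for ms in s["latency_ms"]),
    "cost_per_interaction":   sum(s["cost_usd"] for s in logs) / len(logs),
    "avg_satisfaction":       mean(s["csat"] for s in logs),
    "avg_conversation_depth": mean(s["turns"] for s in logs),
}

for metric, value in report.items():
    print(f"{metric}: {value:.3f}")
```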

  • Leon Palafox is an Influencer

    AI Strategist and Innovation Leader | Turning data and AI into measurable business outcomes

    30,813 followers

    Once, we built a machine learning model that was expected to drive a 15% lift in conversions. The result? A shocking 0.01%.

    What went wrong? The model worked perfectly, but the business process behind it was too long and complex. By the time the offer reached the clients, most leads were lost. And the kicker? The business case was literally giving money to the clients!

    This experience taught us a crucial lesson: even the best machine learning model can fail without an aligned, efficient business process. The model had identified high-value leads, but the operational workflow to turn those leads into conversions was cumbersome and slow. It involved multiple handoffs, redundant steps, and delays that made it nearly impossible for the offer to reach the client in time.

    In this case, the problem wasn’t technical—it was systemic. The gap between predictive insights and actionable outcomes created friction that nullified the model's value.

    When we revisited the process, we streamlined the journey from the model’s output to client interaction. By reducing the time and steps involved, we saw significant improvements—not just in conversion rates but also in the trust clients placed in the business.

    This is why aligning AI models with business operations is just as critical as building accurate models.

    Are your machine learning projects driving real business impact, or are they stuck in the pipeline? Let’s discuss strategies to close the gap and unlock the full potential of your AI investments. Share your thoughts or experiences below!

  • Arlene Dickinson is an Influencer

    #TeamCanada 🇨🇦 Managing General Partner at District Ventures Capital

    388,932 followers

    The idea of increasing speed in business by 10% can be very tempting, especially for entrepreneurs who often feel huge pressure to capitalize on opportunities before they vanish. However, it's crucial to distinguish between speed and urgency.

    Speed focuses on how quickly tasks are completed, often prioritizing rapid execution over other important considerations. This leads to:

    1. Compromised Quality: Rushing through tasks can result in mistakes, lower quality, and ultimately damage to your brand.
    2. Burnout: Constantly working at a high speed can lead to burnout, reducing overall productivity and company morale in the long run.
    3. Shallow Work: Fast work often means less time for deep, strategic thinking, which is essential for innovation and problem-solving.

    Urgency, on the other hand, emphasizes working with purpose, clear goals, and timelines, without necessarily rushing. This approach can:

    1. Enhance Quality: Focusing on doing things right ensures that the output is reliable and of high quality.
    2. Sustain Momentum: A steady, deliberate pace can be more sustainable, helping maintain team energy and engagement.
    3. Encourage Strategic Thinking: Working with intent allows for more thorough analysis, planning, and execution.

    Of course, it depends on the task we are talking about, because let’s face it, some things just have to be rushed. But while increasing speed by 10% overall might seem beneficial in the near term, it’s often much more effective to prioritize doing things correctly and with purpose. For businesses to win in the long run, entrepreneurs should try to balance urgency and quality, moving forward steadily and smartly rather than hastily and recklessly.
