Mastering Coding Challenges

Explore top LinkedIn content from expert professionals.

  • Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,430,352 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

    Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    "Here's code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks, including producing code, writing text, and answering questions.

    And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)
    [Original text: https://lnkd.in/g4bTuWtU]
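    The generate/criticize/rewrite loop described in the post can be sketched in a few lines of Python. Here `call_llm` is a hypothetical stand-in for any chat-completion API call (stubbed below so the sketch runs); the prompt wording follows the pattern above, not any specific library.

    ```python
    def reflect_loop(call_llm, task, rounds=2):
        """Generate an output, then alternate critique and rewrite."""
        output = call_llm(f"Write code for this task: {task}")
        for _ in range(rounds):
            critique = call_llm(
                f"Here's code intended for task {task}:\n{output}\n"
                "Check the code carefully for correctness, style, and "
                "efficiency, and give constructive criticism."
            )
            output = call_llm(
                f"Task: {task}\nCode:\n{output}\nFeedback:\n{critique}\n"
                "Use the feedback to rewrite the code."
            )
        return output

    # Stub LLM so the sketch runs without an API key; it just tags
    # each call so the loop structure is visible.
    def fake_llm(prompt):
        if "criticism" in prompt:
            return "critique"
        return "draft" if "Write code" in prompt else "revised"

    print(reflect_loop(fake_llm, "reverse a string"))  # → revised
    ```

    Swapping `fake_llm` for a real model call turns this into the workflow described above; the tool-use variant would insert a unit-test run between the critique and rewrite steps.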

  • Zain Kahn

    Follow me to learn how you can leverage AI to boost your productivity and accelerate your career. Scaled products to 10 Million+ users.

    999,430 followers

    A developer from Cambridge shared his Claude Code workflow for 2026, and here's what's new:

    1. Video-Based Specification (Spec Phase)
    > Screen Recording: Instead of writing a spec from scratch, find an existing product that is similar to your idea. Record your screen while using it and talking through your specific feature ideas and changes.
    > Generate PRD: Upload this video to Gemini 1.5 Pro (referred to as Gemini 3 Pro/Free Pro in the video context) and ask it to generate a Product Requirement Document (PRD).
    > Refine Spec: Use the "Ask User Question" tool in Claude Code. Prompt it to interview you about the generated spec to fill in missing details (e.g., "How should the emoji picker be positioned?").
    > Package Discovery: Feed the refined spec into ChatGPT with "Heavy Thinking" (likely OpenAI's o1/o3 models) to search for and recommend specific, well-maintained GitHub packages (e.g., for a WYSIWYG editor) to avoid building complex components from scratch.

    2. The Orchestrator Role
    > Design Feedback Loops: Your primary job is not to write code but to design loops where the agent can build, fail, and learn.
    > Monitor & Update: Watch the agent's reasoning. If it makes a mistake, don't just fix the code; update the claude.md (project instructions file) to prevent that specific mistake from happening again.
    > High-Level Decisions: Make the architectural decisions the agent can't, such as choosing the database or specific tools.

    3. Model Choice & Tools
    > Model Selection: Use Opus 4.5 for building large-scale features and GPT-5.2 for architecture and debugging (specific future models mentioned in his 2026 context).
    > Voice Dictation: Use HyperWhisper to dictate prompts significantly faster than typing.

    4. Execution & "Parallel Vibe Coding"
    > Parallel Agents: For small, well-defined tasks (like extracting hard-coded strings for translations), spin up multiple sub-agents to work on different parts of the project simultaneously.
    > Avoid Conflicts: Do not use parallel agents for large features within the same project, to avoid complex merge conflicts ("meshing issues").
    > Sub-agents for Multi-Project Fixes: If a bug exists in a template used by multiple projects, spin up one sub-agent per project to fix them all in parallel.

    5. Review & Maintenance
    > Planning Mode: Use Claude Code's "Planning Mode" to prevent architectural drift, ensuring the agent sticks to the original design vision over time.
    > Shape of Diffs: Inspect the "shape" of the code changes (diffs) to ensure they are manageable and align with expectations before accepting them.
    > Forking for Learning: If the agent does something surprising or complex, fork the session. In the forked session, ask "Why did you do that?" or request diagrams to understand the code without polluting the context of the main working session.

    Get more tutorials here: https://lnkd.in/e64Jvdrt

  • Rajya Vardhan Mishra

    Engineering Leader @ Google | Mentored 300+ Software Engineers | Building high-performance teams | Tech Speaker | Led $1B+ programs | Cornell University | Lifelong learner driven by optimism & growth mindset

    112,343 followers

    In the last 15 years, I have interviewed 800+ Software Engineers across Google, Paytm, Amazon & various startups. Here are the most actionable tips I can give you on how to approach solving coding problems in interviews (my DMs are always flooded with this particular question):

    1. Use a Heap for K Elements
    - When finding the top K largest or smallest elements, heaps are your best tool.
    - They efficiently handle priority-based problems with O(log K) operations.
    - Example: Find the 3 largest numbers in an array.

    2. Binary Search or Two Pointers for Sorted Inputs
    - Sorted arrays often point to Binary Search or Two Pointer techniques.
    - These methods drastically reduce time complexity to O(log n) or O(n).
    - Example: Find two numbers in a sorted array that add up to a target.

    3. Backtracking
    - Use Backtracking to explore all combinations or permutations.
    - It's great for generating subsets or solving puzzles.
    - Example: Generate all possible subsets of a given set.

    4. BFS or DFS for Trees and Graphs
    - Trees and graphs are often solved using BFS for shortest paths or DFS for traversals.
    - BFS is best for level-order traversal, while DFS is useful for exploring paths.
    - Example: Find the shortest path in a graph.

    5. Convert Recursion to Iteration with a Stack
    - Recursive algorithms can be converted to iterative ones using a stack.
    - This approach provides more control over memory and avoids stack overflow.
    - Example: Iterative in-order traversal of a binary tree.

    6. Optimize Arrays with HashMaps or Sorting
    - Replace nested loops with HashMaps for O(n) solutions or sorting for O(n log n).
    - HashMaps are perfect for lookups, while sorting simplifies comparisons.
    - Example: Find duplicates in an array.

    7. Use Dynamic Programming for Optimization Problems
    - DP breaks problems into smaller overlapping sub-problems for optimization.
    - It's often used for maximization, minimization, or counting paths.
    - Example: Solve the 0/1 knapsack problem.

    8. HashMap or Trie for Common Substrings
    - Use HashMaps or Tries for substring searches and prefix matching.
    - They efficiently handle string patterns and reduce redundant checks.
    - Example: Find the longest common prefix among multiple strings.

    9. Trie for String Search and Manipulation
    - Tries store strings in a tree-like structure, enabling fast lookups.
    - They're ideal for autocomplete or spell-check features.
    - Example: Implement an autocomplete system.

    10. Fast and Slow Pointers for Linked Lists
    - Use two pointers moving at different speeds to detect cycles or find midpoints.
    - This approach avoids extra memory usage and works in O(n) time.
    - Example: Detect if a linked list has a loop.

    💡 Save this for your next interview prep!
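    Tip 1's heap-for-K-elements pattern is a one-liner with Python's standard library. A sketch of the "find the 3 largest numbers" example:

    ```python
    import heapq

    def k_largest(nums, k):
        # heapq.nlargest maintains a min-heap of size k internally,
        # giving O(n log k) time instead of a full O(n log n) sort.
        return heapq.nlargest(k, nums)

    print(k_largest([5, 1, 9, 3, 7, 8], 3))  # → [9, 8, 7]
    ```

    `heapq.nsmallest` covers the "K smallest" variant the same way; for streaming data you would push onto a heap manually and pop when it exceeds size k.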

  • Navveen Balani

    LinkedIn Top Voice | Google Cloud Fellow | Chair - Standards Working Group @ Green Software Foundation | Driving Sustainable AI Innovation & Specification | Award-winning Author | Let’s Build a Responsible Future

    12,174 followers

    From Code Generation to System Integration: Why AI Coding Tools and Agentic IDEs Must Evolve to Solve Real Software Development Challenges

    Since GPT-3 went mainstream, AI coding tools have sprinted through three waves.
    1. First came smart autocomplete.
    2. Then came cloud companions tuned to specific stacks.
    3. Now we're in the agent wave: tools that read whole repos, open terminals, run tests and raise pull requests on their own.

    Every cycle starts the same way: Wow. Impressive. Look at how much this can do for me. But the uncomfortable truth is this: most of what these tools automate is commodity knowledge. Framework boilerplate, CRUD patterns, standard integration glue, typical test shapes: once a pattern exists in public code, a model can learn it and repeat it very well. That used to feel like expertise. Now it's autocomplete on steroids.

    The real problems have barely moved:
    • Design and architecture. Not just file-by-file edits, but coherent system design: boundaries, contracts, data flows, failure modes, performance budgets. A holistic solution, not local patchwork.
    • End-to-end SDLC integration. How change actually flows from idea to production: design, review, CI, approvals, environments, rollout strategies and on-call ownership.
    • Change management and legacy transformation. How to evolve decade-old systems, untangle hidden dependencies, migrate behaviour safely and avoid breaking everything that still quietly depends on "that old module".
    • Traceability. Knowing who or what changed what, why, and what else was impacted, across code, configs, data pipelines and policies.
    • Principle enforcement. How strongly workflows enforce the top 10 principles (reliability, security, cost, maintainability and the rest outlined in the earlier post), not as posters on a wall, but as gates every change must pass through.

    This is where vibe-coding tools become dangerous. The model writes the feature, generates the tests, explains the diff. Everything looks green. It feels safe enough to ship on vibes. Without deep expertise and a solid workflow around it, that is not productivity. It is an efficient way to inject new risk into a live system.

    If code patterns are now cheap, differentiation shifts somewhere else:
    • To how clearly an organisation defines how systems should be built and evolved
    • To how tightly AI tools are integrated with that SDLC, not just with the editor
    • To how well workflows embody design principles, change discipline and traceability by default

    Writing code is becoming a commodity. However, writing holistic, thoughtful systems, and continuously evolving and governing them safely, is where the true value lies. AI coding copilots and agentic IDEs now need to evolve from "look what I can generate" to "look how I help you integrate, operate and transform". That's when it stops being "wow, impressive demo" and becomes "yes, this is finally solving the real problem."

  • Lenny Rachitsky

    Deeply researched no-nonsense product, growth, and career advice

    349,510 followers

    My biggest takeaways from Lazar Jovanovic (a full-time vibe coder at Lovable):

    1. Vibe coding is now a full-time professional career. Lazar Jovanovic gets paid to build internal tools and public-facing products using AI. His work ranges from Shopify integrations and merch stores to complex internal dashboards tracking feature adoption. Companies across the S&P 500 are actively hiring people with these skills.

    2. Not having a technical background can be an advantage when vibe coding. People without engineering experience don't know what's "supposedly impossible," so they try things that technical people might dismiss. Lazar built Chrome extensions and desktop apps with Lovable because he didn't know the technical reasons why those shouldn't work. The willingness to try anything until proven wrong unlocks capabilities others overlook.

    3. Coding is no longer the bottleneck; clarity is. Lazar spends 80% of his time planning and chatting with AI agents and only 20% building. The ceiling on AI output isn't model intelligence; it's what the model is told and shown before it acts. Your job is to provide clear context and instructions, not to write code.

    4. AI tools have a limited context window that you must manage deliberately. Think of it like a genie granting you three wishes. Every request consumes tokens for reading, thinking, and executing. If you dump in vague requests, the AI spends most of its tokens figuring out what you want and has little left for quality output.

    5. Start five parallel builds to clarify your thinking faster. Instead of overthinking one design, brain-dump into the first project, get more specific in the second, add visual references in the third, attach actual code snippets in the fourth, and compare all five. This costs more up front but saves hundreds of credits and days of iteration later.

    6. When you get stuck, follow this four-step debugging sequence. First, let the tool try to fix it. Second, add console logs to track what's happening and paste the output back to the agent. Third, bring in an external tool like Codex for deeper diagnosis. Fourth, revert a few steps and try a cleaner prompt.

    7. Use rules files to make your findings permanent. After solving a problem, ask the AI how to prompt it better next time, and then add that guidance to your rules.md file.

    8. Design and taste are becoming increasingly valuable in the AI era. When everyone can produce good-enough output instantly, the people who stand out are those who understand emotional design decisions (fonts, spacing, micro-interactions) that AI can't yet replicate well. Invest time in studying elite design work.

    9. Build in public to land vibe-coding jobs. Lazar got hired at Lovable because he was already shipping projects, teaching on YouTube, and sharing knowledge on LinkedIn. Hires at Lovable submitted Lovable apps instead of resumes. The fastest path to a professional vibe-coding role is doing the job before anyone pays you for it.

  • Satyam Jyottsana Gargee

    Software engineer | AI & Tech | LinkedIn Top Voice 2025 | Ex-Microsoft | walmart | 260k+ community | Featured on Time Square | Josh Talk speaker

    211,836 followers

    𝐇𝐨𝐰 𝐦𝐮𝐜𝐡 𝐃𝐒𝐀 𝐢𝐬 𝐞𝐧𝐨𝐮𝐠𝐡 𝐭𝐨 𝐜𝐫𝐚𝐜𝐤 𝐌𝐢𝐜𝐫𝐨𝐬𝐨𝐟𝐭, 𝐆𝐨𝐨𝐠𝐥𝐞 𝐨𝐫 𝐖𝐚𝐥𝐦𝐚𝐫𝐭?

    This is the most common DM I get from juniors: "Ma'am, I've solved 300+ questions but still can't solve new ones. How many do I really need to do?"

    When I started, I had the same doubt. Some seniors said 300 questions, others said 500+ to be safe. So I rushed to hit those numbers. But here's the truth: it's not about the count, it's about the patterns. Once you master patterns, every new problem feels familiar.

    Here are the 15 patterns you must know for placements:
    1. 𝐓𝐰𝐨 𝐏𝐨𝐢𝐧𝐭𝐞𝐫 𝐓𝐞𝐜𝐡𝐧𝐢𝐪𝐮𝐞 – Solve pair/relationship problems in arrays/linked lists.
    2. 𝐒𝐥𝐢𝐝𝐢𝐧𝐠 𝐖𝐢𝐧𝐝𝐨𝐰 – Efficiently handle subarray/substring problems.
    3. 𝐇𝐚𝐬𝐡𝐢𝐧𝐠 / 𝐅𝐫𝐞𝐪𝐮𝐞𝐧𝐜𝐲 𝐂𝐨𝐮𝐧𝐭𝐢𝐧𝐠 – O(1) lookups for counts, duplicates, mapping.
    4. 𝐏𝐫𝐞𝐟𝐢𝐱 𝐒𝐮𝐦𝐬 – Answer range queries fast.
    5. 𝐁𝐢𝐧𝐚𝐫𝐲 𝐒𝐞𝐚𝐫𝐜𝐡 (𝐚𝐧𝐝 𝐯𝐚𝐫𝐢𝐚𝐧𝐭𝐬) – For sorted arrays or monotonic conditions.
    6. 𝐆𝐫𝐞𝐞𝐝𝐲 – Local choices that lead to global solutions.
    7. 𝐃𝐲𝐧𝐚𝐦𝐢𝐜 𝐏𝐫𝐨𝐠𝐫𝐚𝐦𝐦𝐢𝐧𝐠 – Break down overlapping subproblems.
    8. 𝐁𝐚𝐜𝐤𝐭𝐫𝐚𝐜𝐤𝐢𝐧𝐠 – Explore all possibilities (subsets, permutations).
    9. 𝐁𝐅𝐒 – Shortest paths, level-by-level traversals.
    10. 𝐃𝐅𝐒 – Explore all paths, detect cycles.
    11. 𝐇𝐞𝐚𝐩 / 𝐓𝐨𝐩-𝐊 – Manage largest/smallest efficiently.
    12. 𝐌𝐞𝐫𝐠𝐞 𝐈𝐧𝐭𝐞𝐫𝐯𝐚𝐥𝐬 – Handle overlaps in schedules.
    13. 𝐔𝐧𝐢𝐨𝐧-𝐅𝐢𝐧𝐝 – Manage connectivity in graphs.
    14. 𝐓𝐫𝐢𝐞 – Prefix-based search and storage.
    15. 𝐌𝐨𝐧𝐨𝐭𝐨𝐧𝐢𝐜 𝐒𝐭𝐚𝐜𝐤 / 𝐐𝐮𝐞𝐮𝐞 – Solve next/previous greater/smaller problems.

    And remember, DSA is not a sprint, it is a marathon. Rejections will happen, and that is normal. But every attempt makes you sharper, and every failure teaches you a pattern in life too.

    #DSA #Placements #CodingInterviews #ProblemSolving #CareerAdvice
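    To make one of these patterns concrete, here is a minimal sketch of Sliding Window (pattern 2) on the classic "maximum sum of a subarray of size k" problem:

    ```python
    def max_sum_subarray(nums, k):
        # Seed the window with the first k elements, then slide it:
        # add the entering element, subtract the leaving one.
        # O(n) total, versus O(n*k) recomputing each window's sum.
        window = sum(nums[:k])
        best = window
        for i in range(k, len(nums)):
            window += nums[i] - nums[i - k]
            best = max(best, window)
        return best

    print(max_sum_subarray([2, 1, 5, 1, 3, 2], 3))  # → 9 (from [5, 1, 3])
    ```

    The same add-one, drop-one structure underlies the variable-size variants (longest substring without repeats, etc.), where the window grows and shrinks based on a condition instead of staying fixed at k.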

  • Saumya Awasthi

    Senior Software Engineer | AI & Tech Content Creator | Career Growth Storyteller | Featured in Times Square | Open to Collabs 🤝

    345,826 followers

    Most people don’t fail in DSA because it’s hard. They fail because they try to learn everything instead of learning the right patterns. If you’re a fresher preparing for coding interviews, stop collecting questions. Start mastering patterns. Here’s the exact roadmap I recommend 👇

    1️⃣ Arrays
    Core patterns you must know:
    • Two Pointers
    • Sliding Window (fixed and variable)
    • Prefix Sum
    • Kadane’s Algorithm
    • Hashing / Frequency Map
    • Sorting + Greedy
    • Cyclic Sort
    • Binary Search

    2️⃣ Linked Lists
    Core patterns:
    • Fast & Slow Pointer
    • Dummy Node
    • Reversal (entire list / k-group)
    • Merge Lists
    • Pointer Rewiring

    3️⃣ Stack & Queue
    Core patterns:
    • Monotonic Stack
    • Monotonic Queue
    • Stack for Previous / Next Greater
    • Sliding Window + Deque

    4️⃣ Trees & Graphs
    Core patterns:
    • DFS (pre / in / post order)
    • BFS (level order)
    • Recursion
    • Backtracking on Trees
    • Dijkstra
    • Topological Sort
    • Union Find

    5️⃣ Advanced Patterns
    • Binary Search on Answer
    • Greedy
    • Dynamic Programming
    ◦ 0/1 Knapsack
    ◦ Unbounded Knapsack
    ◦ DP on Strings
    • Heap (Top K, Merge K)
    • Bit Manipulation

    You don’t need 1000 problems. You need clarity on these patterns. Once you understand the pattern, 10 different questions start looking the same. That’s when preparation becomes smart. If you’re preparing for placements or switching jobs, save this post and follow for more such content ❤️
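    The "Fast & Slow Pointer" pattern from the Linked Lists section is Floyd's cycle detection. A minimal runnable sketch:

    ```python
    class Node:
        def __init__(self, val):
            self.val = val
            self.next = None

    def has_cycle(head):
        # Fast advances two steps per iteration, slow advances one.
        # If there's a loop, fast eventually laps slow; otherwise
        # fast falls off the end. O(n) time, O(1) extra space.
        slow = fast = head
        while fast and fast.next:
            slow = slow.next
            fast = fast.next.next
            if slow is fast:
                return True
        return False

    a, b, c = Node(1), Node(2), Node(3)
    a.next, b.next = b, c
    print(has_cycle(a))   # → False
    c.next = b            # create a loop back to b
    print(has_cycle(a))   # → True
    ```

    The same two-speed idea finds a list's midpoint (when fast reaches the end, slow is halfway), which is why the pattern earns its own spot in the roadmap.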

  • Deeksha Pandey

    SWE III at Google | Building scalable AI systems | Tech Creator | Open to collaborate

    255,280 followers

    Top 5 Must-Know DSA Patterns 👇🏻👇🏻

    DSA problems often follow recurring patterns. Mastering these patterns can make problem-solving more efficient and help you ace coding interviews. Here’s a quick breakdown:

    1. Sliding Window
    • Use Case: Solves problems involving contiguous subarrays or substrings.
    • Key Idea: Slide a window over the data to dynamically track subsets.
    • Examples: Maximum sum of subarray of size k; longest substring without repeating characters.

    2. Two Pointers
    • Use Case: Optimizes array problems involving pairs or triplets of elements.
    • Key Idea: Use two pointers to traverse from opposite ends or incrementally.
    • Examples: Pair with target sum in a sorted array; trapping rainwater problem.

    3. Binary Search
    • Use Case: Efficiently solves problems with sorted data or requiring optimization.
    • Key Idea: Repeatedly halve the search space to narrow down the solution.
    • Examples: Find an element in a sorted array; search in a rotated sorted array.

    4. Dynamic Programming (DP)
    • Use Case: Handles problems with overlapping subproblems and optimal substructure.
    • Key Idea: Build solutions iteratively using a table to store intermediate results.
    • Examples: 0/1 Knapsack problem; longest common subsequence.

    5. Backtracking
    • Use Case: Solves problems involving all possible combinations, subsets, or arrangements.
    • Key Idea: Incrementally build solutions and backtrack when a condition is not met.
    • Examples: N-Queens problem; Sudoku solver.

    Why These Patterns? By focusing on patterns, you can identify the right approach quickly, saving time and improving efficiency in problem-solving.
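    Pattern 2's "traverse from opposite ends" idea can be sketched on the pair-with-target-sum example it cites:

    ```python
    def pair_with_sum(sorted_nums, target):
        # Pointers start at both ends of the sorted array. If the sum
        # is too small, only moving the left pointer can increase it;
        # if too large, only moving the right pointer can decrease it.
        # O(n) time instead of O(n^2) nested loops.
        lo, hi = 0, len(sorted_nums) - 1
        while lo < hi:
            s = sorted_nums[lo] + sorted_nums[hi]
            if s == target:
                return (sorted_nums[lo], sorted_nums[hi])
            if s < target:
                lo += 1
            else:
                hi -= 1
        return None  # no pair found

    print(pair_with_sum([1, 2, 4, 6, 10], 8))  # → (2, 6)
    ```

    Note the precondition in the name: the argument must already be sorted, which is exactly why sorted input is the signal to reach for this pattern.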

  • Ryan Mitchell

    O'Reilly / Wiley Author | LinkedIn Learning Instructor | Principal Software Engineer @ GLG

    30,296 followers

    I’ve been working on a massive prompt that extracts structured data from unstructured text. It's effectively a program, developed over the course of weeks, in plain English. Each instruction is precise. The output format is strict. The logic flows. It should Just Work™.

    And the model? Ignores large swaths of it. Not randomly, but consistently and stubbornly. This isn't a "program," it's a probability engine with auto-complete.

    This is because LLMs don’t "read" like we do, or execute prompts like a program does. They run everything through the "attention mechanism," which mathematically weighs which tokens matter in relation to others. Technically speaking: each token is transformed into a query, key, and value vector. The model calculates dot products between the query vector and all key vectors to assign weights. Basically: "How relevant is this other token to what I’m doing right now?" Then it averages the values using those weights and moves on. No state. No memory. Just a rolling calculation over a sliding window of opaquely-chosen context.

    It's kind of tragic, honestly. You build this beautifully precise setup, but because your detailed instructions are buried in the middle of a long prompt, or phrased too much like background noise, they get low scores. The model literally pays less attention to them. We thought we were vibe coding, but the real vibe coder was the LLM all along!

    So how to fix it? Don’t just write accurate instructions. Write ATTENTION-WORTHY ones.
    - 🔁 Repeat key patterns. Repetition increases token relevance, especially when you're relying on specific phrasing to guide the model's output.
    - 🔝 Push constraints to the top. Instructions buried deep in the prompt get lower attention scores. Front-load critical rules so they have a better chance of sticking.
    - 🗂️ Use structure to force salience. Consistent headers, delimiters, and formatting cues help key sections stand out. Markdown, line breaks, and even ALL CAPS (sparingly) can help direct the model's focus to what actually matters.
    - ✂️ Cut irrelevant context. The less junk in the prompt, the more likely your real instructions are to be noticed and followed.

    You're not teaching a model. You're gaming a scoring function.
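    The query/key/value mechanics described in the post can be made concrete with a toy calculation. This is a deliberately simplified sketch (tiny 2-d vectors, no learned projections, no scaling or multi-head machinery) just to show dot-product scoring and weighted averaging:

    ```python
    import math

    def attention(query, keys, values):
        # Score each key by its dot product with the query...
        scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
        # ...convert scores to weights with softmax...
        exps = [math.exp(s) for s in scores]
        weights = [e / sum(exps) for e in exps]
        # ...and return the weighted average of the value vectors.
        return [sum(w * v[i] for w, v in zip(weights, values))
                for i in range(len(values[0]))]

    # A key aligned with the query gets nearly all the weight, so the
    # output is dominated by that key's value vector.
    query = [1.0, 0.0]
    keys = [[4.0, 0.0], [0.0, 4.0]]      # first key aligns with the query
    values = [[10.0, 0.0], [0.0, 10.0]]
    out = attention(query, keys, values)
    print([round(x, 2) for x in out])    # → [9.82, 0.18]
    ```

    This is the "scoring function" being gamed: tokens whose keys don't align with what the model is currently attending to contribute almost nothing, no matter how precise their instructions were.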

  • Andreas Sjostrom

    LinkedIn Top Voice | AI Agents | Robotics I Vice President at Capgemini's Applied Innovation Exchange | Author | Speaker | San Francisco | Palo Alto

    14,211 followers

    In the last few months, I have explored LLM-based code generation, comparing Zero-Shot to multiple types of Agentic approaches. The approach you choose can make all the difference in the quality of the generated code.

    Zero-Shot vs. Agentic Approaches: What's the Difference?
    ⭐ Zero-Shot Code Generation is straightforward: you provide a prompt, and the LLM generates code in a single pass. This can be useful for simple tasks but often results in basic code that may miss nuances, optimizations, or specific requirements.
    ⭐ Agentic Approach takes it further by leveraging LLMs in an iterative loop. Here, different agents are tasked with improving the code based on specific guidelines, like performance optimization, consistency, and error handling, ensuring a higher-quality, more robust output.

    Let's look at a quick Zero-Shot example, a basic file management function. Below is a simple function that appends text to a file:

    def append_to_file(file_path, text_to_append):
        try:
            with open(file_path, 'a') as file:
                file.write(text_to_append + '\n')
            print("Text successfully appended to the file.")
        except Exception as e:
            print(f"An error occurred: {e}")

    This is an OK start, but it's basic: it lacks validation, proper error handling, thread safety, and consistency across different use cases.

    Using an agentic approach, we have a Developer Lead Agent that coordinates a team of agents: the Developer Agent generates code, passes it to a Code Review Agent that checks for potential issues or missing best practices, and coordinates improvements with a Performance Agent to optimize it for speed. At the same time, a Security Agent ensures it's safe from vulnerabilities. Finally, a Team Standards Agent can refine it to adhere to team standards. This process can be iterated any number of times until the Code Review Agent has no further suggestions.

    The resulting code will evolve to handle multiple threads, manage file locks across processes, batch writes to reduce I/O, and align with coding standards. Through this agentic process, we move from basic functionality to a more sophisticated, production-ready solution.

    An agentic approach reflects how we can harness the power of LLMs iteratively, bringing human-like collaboration and review processes to code generation. It's not just about writing code; it's about continuously improving it to meet evolving requirements, ensuring consistency, quality, and performance. How are you using LLMs in your development workflows? Let's discuss!
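    To make the before/after tangible, here is a sketch of the kind of hardened version such a review loop might converge on. This is illustrative only (the post doesn't show its final code): it adds input validation and in-process thread safety, while cross-process file locks and batched writes are left out for brevity.

    ```python
    import threading
    from pathlib import Path

    _write_lock = threading.Lock()  # serializes writers within this process

    def append_to_file(file_path, text_to_append):
        """Append one line of text to file_path; returns True on success."""
        if not isinstance(text_to_append, str) or not text_to_append:
            raise ValueError("text_to_append must be a non-empty string")
        path = Path(file_path)
        path.parent.mkdir(parents=True, exist_ok=True)  # ensure dir exists
        with _write_lock:  # thread safety (single process only)
            with path.open("a", encoding="utf-8") as f:
                f.write(text_to_append + "\n")
        return True

    append_to_file("demo.log", "first entry")
    append_to_file("demo.log", "second entry")
    print(Path("demo.log").read_text(encoding="utf-8"))
    ```

    Note the design shift the agents would push for: errors are raised instead of printed (so callers can react), the encoding is explicit, and the lock makes concurrent appends from multiple threads safe. A production version would layer on OS-level advisory locks for multi-process safety.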
