
Integrating AI Code Reviews in Azure DevOps & GitHub

· 4 min read
Craig Dempsey
Cloud DevOps Engineer @ Digital Reflections

CodeRabbit is an AI-powered code review tool designed to streamline and enhance the code review process. It integrates seamlessly with platforms like GitHub, GitLab, and Azure DevOps, providing context-aware, line-by-line feedback on pull requests within minutes. By leveraging advanced AI models, CodeRabbit not only identifies potential issues such as bugs, security vulnerabilities, and performance bottlenecks but also offers actionable suggestions to improve code quality and maintainability. Its adaptive learning capabilities allow it to evolve with your team's coding standards and preferences, ensuring consistent and efficient reviews over time.

Beyond traditional code analysis, CodeRabbit offers features like auto-generated pull request summaries, sequence diagrams, and real-time chat interactions within the pull request interface. Developers can engage in natural language conversations with the AI to clarify suggestions or request further assistance. Additionally, CodeRabbit supports integration with tools like Jira and Linear, facilitating seamless issue tracking and project management. With options for both SaaS and self-hosted deployments, and a commitment to data privacy through ephemeral review environments and SOC2 Type II certification, CodeRabbit provides a comprehensive solution for modern development teams aiming to accelerate their workflows and elevate code quality.

Azure OpenAI with LibreChat for Private and Cost-Effective AI Chatbots

· 11 min read
Craig Dempsey
Cloud DevOps Engineer @ Digital Reflections

AI chatbots are now essential for businesses, developers, and tech enthusiasts. While OpenAI’s ChatGPT is a powerful hosted solution, it has drawbacks such as ongoing subscription costs, privacy concerns, and limited customization options. What if you could run a powerful AI chatbot on your own infrastructure while leveraging the scalability and reliability of cloud-hosted large language models (LLMs)?

This is where LibreChat comes in. LibreChat is an open-source, self-hosted chatbot interface that provides a familiar ChatGPT-style UI while allowing you to connect to various AI models, including OpenAI's API, Azure OpenAI, and even local LLMs. By integrating LibreChat with Azure OpenAI, you get the best of both worlds—a customizable, private chatbot without the need to invest in expensive local hardware.
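
As a rough sketch of what that wiring can look like, here is an illustrative librechat.yaml fragment pointing LibreChat at an Azure OpenAI deployment. The instance name, deployment name, API version, and group label are placeholders, and the exact schema can change between LibreChat releases, so treat this as a shape to verify against the LibreChat documentation rather than copy-paste configuration.

```yaml
# Illustrative librechat.yaml fragment for an Azure OpenAI endpoint.
# Instance name, deployment name, and API version are placeholders;
# verify the schema against the current LibreChat documentation.
endpoints:
  azureOpenAI:
    groups:
      - group: "azure-lab"                  # arbitrary label for this set of deployments
        apiKey: "${AZURE_API_KEY}"          # read from the environment, never hard-coded
        instanceName: "my-openai-instance"  # https://<instanceName>.openai.azure.com
        version: "2024-02-15-preview"       # Azure OpenAI API version
        models:
          gpt-4o:
            deploymentName: "gpt-4o"        # Azure deployment name for the model
```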

Deploying Azure OpenAI via Bicep: Key Considerations & Lab Setup

· 15 min read
Craig Dempsey
Cloud DevOps Engineer @ Digital Reflections

Introduction

Azure OpenAI enables organizations to leverage powerful AI models such as GPT-4o, o3-mini, and Whisper for a variety of use cases, from chatbots to code generation and beyond. However, deploying Azure OpenAI via Infrastructure as Code (IaC) using Bicep requires careful planning to ensure a scalable, cost-effective, and secure deployment.

In this article, we’ll set up a simple lab environment to explore the key concepts behind deploying Azure OpenAI with Bicep. This will help us gain a deeper understanding of the service’s deployment types, regional availability, quota limitations, and model options. By the end, we’ll have a working Azure OpenAI deployment with a few models to experiment with, helping us plan more advanced deployments in the future.
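
To make those moving parts concrete before diving in, here is a minimal Bicep sketch of the shape such a deployment takes: a Cognitive Services account of kind 'OpenAI' plus a child model deployment. The resource names, location, capacity, and model version are illustrative placeholders, not the article's final lab code.

```bicep
// Minimal sketch: an Azure OpenAI account plus a single model deployment.
// Names, location, capacity, and model version are placeholders.
param location string = 'swedencentral'

resource openAi 'Microsoft.CognitiveServices/accounts@2023-05-01' = {
  name: 'aoai-lab-demo'
  location: location
  kind: 'OpenAI'
  sku: {
    name: 'S0'
  }
  properties: {
    customSubDomainName: 'aoai-lab-demo' // needed for Entra ID auth and private endpoints
  }
}

resource gpt4o 'Microsoft.CognitiveServices/accounts/deployments@2023-05-01' = {
  parent: openAi
  name: 'gpt-4o'
  sku: {
    name: 'Standard'   // deployment type; 'GlobalStandard' is another common choice
    capacity: 10       // quota, in thousands of tokens per minute
  }
  properties: {
    model: {
      format: 'OpenAI'
      name: 'gpt-4o'
      version: '2024-08-06'
    }
  }
}
```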

Taming Azure Firewall Policies with Bicep: A Battle Against Nested Loops

· 12 min read
Craig Dempsey
Cloud DevOps Engineer @ Digital Reflections

Deploying Azure Firewalls with IP Groups and Firewall Policies using Azure Verified Modules (AVM) sounded straightforward—until I hit a wall with Bicep’s nested loop limitations. What followed was a deep dive into dependency chains, AVM quirks, and creative workarounds. Here’s how I tamed the beast, and how you can too.
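
As a sketch of the general workaround I mean, with made-up IP group and environment names rather than the article's actual AVM inputs: when Bicep refuses a nested loop inside a resource body, pre-compute the cross product in a variable with flatten() and drive a single-level loop from that.

```bicep
// Sketch of the flatten() workaround for nested loops (illustrative names only).
param environments array = [ 'dev', 'prod' ]
param ipGroupDefs array = [
  { name: 'ipg-web', addresses: [ '10.0.1.0/24' ] }
  { name: 'ipg-data', addresses: [ '10.0.2.0/24' ] }
]

// Nested for-expressions are allowed in variables; flatten() collapses the
// result into a single array that one resource loop (or a module loop) can consume.
var ipGroupInstances = flatten([for env in environments: [for def in ipGroupDefs: {
  name: '${def.name}-${env}'
  addresses: def.addresses
}]])

resource ipGroups 'Microsoft.Network/ipGroups@2023-09-01' = [for item in ipGroupInstances: {
  name: item.name
  location: resourceGroup().location
  properties: {
    ipAddresses: item.addresses
  }
}]
```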