Hammer Dev was excited to attend an important Microsoft event in NYC last week, along with what looked to be about 2,000 other IT professionals. This was a major stop on a country-wide tour to spread the word on Microsoft's products, service offerings, and roadmap for its favorite topic: AI. The wide variety of breakout sessions, workshops, and demos was geared toward technical and developer audiences alike.
Of all the great content and topics presented, one theme in particular struck me as a major step forward:
Microsoft's roadmap for Fabric to be "the data platform for AI" was increasingly evident and played a major role in just about every discussion and demo. The AI Tour was easily as much about data as it was about AI, and it further impressed on me that a properly governed data estate is a prerequisite for adopting and proceeding with any AI implementation.
While there were no real "feature drops" at this event, two topics in particular prompted a lot of excitement and enthusiasm:
Microsoft Fabric Mirroring
Announced during Ignite, Mirroring makes it more attainable to build and manage a true Data Mesh within your company's unified data estate. Copying data from one environment to another is no longer a requirement for your organization's Data Lakehouse (though there are still plenty of real-world reasons to architect one, such as mitigating business-hours stress on a key operational source system). You can incorporate nearly any data source (even Snowflake) into your Fabric data estate, query it in near real time alongside your other data assets, and still maintain AI-backed data governance monitoring. In short, you can have a data mesh architecture while still treating your unified data platform like a traditional Data Warehouse / Lakehouse!
During the Microsoft AI Tour, presenters showcased newly added data sources that support Mirroring: Azure Database for PostgreSQL, on-premises SQL Server, Azure Database for MySQL, and MongoDB.
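To make the "query it like a warehouse" point concrete, here is a minimal sketch of what querying a mirrored source from Python might look like. A mirrored database in Fabric surfaces a SQL analytics endpoint that accepts ordinary T-SQL; the server name, database, and tables below are hypothetical placeholders, not from the demo.

```python
import pyodbc

# Hypothetical SQL analytics endpoint for a Fabric workspace.
# Mirrored sources appear as databases you can query with plain T-SQL.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=your-workspace.datawarehouse.fabric.microsoft.com;"  # placeholder
    "Database=MirroredErpDb;"                                    # placeholder
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

# Query mirrored ERP tables in place; no copy pipeline is required,
# since Mirroring replicates the source into OneLake.
QUERY = """
SELECT TOP 10 c.CustomerName, SUM(o.OrderTotal) AS Total
FROM   dbo.Orders    AS o
JOIN   dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY c.CustomerName
ORDER BY Total DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.cursor().execute(QUERY):
        print(row.CustomerName, row.Total)
```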
Microsoft Power Platform Copilot API connectivity
One breakout session gave a very impressive showcase of a new Power Platform Copilot Studio feature (formerly Power Virtual Agents, now with generative AI built in). In theory, any REST API that supports GET requests can be integrated into a Custom Copilot and act just like any other data source. Part of the "smarts" of this feature is its ability to decipher an API's Swagger (OpenAPI) document to understand where best to find information for a given domain entity or object.
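As an illustration (not the session's actual code), the sketch below shows the kind of GET-only API surface this feature consumes: a REST endpoint plus a Swagger/OpenAPI document describing it. All names and data here are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical ERP data; a real ERP would serve this from its database.
ORDERS = {"1001": {"customer": "Contoso", "status": "Shipped", "total": 1250.00}}

@app.get("/orders/<order_id>")
def get_order(order_id: str):
    """GET endpoint a Custom Copilot could call to answer order questions."""
    order = ORDERS.get(order_id)
    return (jsonify(order), 200) if order else (jsonify(error="not found"), 404)

@app.get("/swagger.json")
def swagger():
    """Minimal OpenAPI document; the Copilot reads this to map a question
    like 'what is the status of order 1001?' onto the right endpoint."""
    return jsonify({
        "openapi": "3.0.0",
        "info": {"title": "Fictional ERP API", "version": "1.0"},
        "paths": {
            "/orders/{orderId}": {
                "get": {
                    "summary": "Look up an order by id",
                    "parameters": [{"name": "orderId", "in": "path",
                                    "required": True,
                                    "schema": {"type": "string"}}],
                    "responses": {"200": {"description": "Order details"}},
                }
            }
        },
    })

if __name__ == "__main__":
    app.run(port=5000)
```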
The presenters gave a very impressive from-scratch demo, all in low-code / no-code, that established a new Custom Copilot and integrated it with a fictional third-party ERP API. They finished by deploying it as both a standalone Power Apps chatbot and a Teams chatbot app.
Exciting News, With a Note of Caution
As fascinating as this feature was, it also gave me great pause. I have been interfacing and integrating with custom and third-party APIs for years, and an API, if not properly architected and planned for, carries the same concerns and considerations as any other data source. If proper governance, security, and entitlements are not enforced, you can expose an entire business system (such as an ERP) to groups of users who should not have that access, with as little as thirty minutes' worth of low-code / no-code work and deployment.
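To make the concern concrete, here is a hedged sketch of the kind of server-side entitlement check the hypothetical ERP API above would need before being wired into a Copilot. Without something like it, every Copilot user effectively inherits full read access to the system; the entitlement table and header name here are illustrative assumptions.

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Hypothetical entitlement table: which API keys may read which ERP areas.
ENTITLEMENTS = {"key-finance-team": {"orders"}, "key-warehouse": {"inventory"}}

def require_entitlement(area: str) -> None:
    """Reject the request unless the caller's API key is entitled to `area`."""
    key = request.headers.get("X-Api-Key", "")
    if area not in ENTITLEMENTS.get(key, set()):
        abort(403)  # deny by default: no key, no access

@app.get("/orders/<order_id>")
def get_order(order_id: str):
    require_entitlement("orders")  # enforced in the API, not in the Copilot
    return jsonify(order_id=order_id, status="Shipped")
```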
The key takeaway is that no matter how attainable Microsoft (and others) make AI and business intelligence for almost any organization, formal and structured planning is absolutely crucial to any AI / BI journey, even if you believe your corporate data estate is well administered and mature. Establishing an AI Adoption Roadmap through a detailed assessment effort should be the first step on every organization's path to true AI and transformative data insights.
Need help managing your data estate? Let’s talk.