
ChatGPT Gets a Computer


Ten years ago (from last Saturday) I launched Stratechery with an image of sailboats:

A picture of Sailboats

A simple image. Two boats, and a big ocean. Perhaps it’s a race, and one boat is winning — until it isn’t, of course. Rest assured there is breathless coverage of every twist and turn, and skippers are alternately held as heroes and villains, and nothing in between.

Yet there is so much more happening. What are the winds like? What have they been like historically, and can we use that to better understand what will happen next? Is there a major wave just off the horizon that will reshape the race? Are there fundamental qualities in the ships themselves that matter far more than whatever skipper is at hand? Perhaps this image is from the America’s Cup, and the trailing boat is quite content to mirror the leading boat all the way to victory; after all, this is but one leg in a far larger race.

It’s these sorts of questions that I’m particularly keen to answer about technology. There are lots of (great!) sites that cover the day-to-day. And there are some fantastic writers who divine what it all means. But I think there might be a niche for context. What is the historical angle on today’s news? What is happening on the business side? Where is value being created? How does this translate to normals?

ChatGPT seems to affirm that I have accomplished my goal; Mike Conover ran an interesting experiment where he asked ChatGPT to identify the author of my previous Article, The End of Silicon Valley (Bank), based solely on the first four paragraphs:1

Conover asked ChatGPT to expound on its reasoning:

ChatGPT was not, of course, expounding on its reasoning, at least in a technical sense: ChatGPT has no memory; rather, when Conover asked the bot to explain what it meant his question included all of the session’s previous questions and answers, which provided the context necessary for the bot to simulate an ongoing conversation, and then statistically predict the answer, word-by-word, that satisfied the query.

This observation of how ChatGPT works is often wielded by those skeptical about assertions of intelligence; sure, the prediction is impressive, and nearly always right, but it’s not actually thinking — and besides, it’s sometimes wrong.

Prediction and Hallucination

In 2004, Jeff Hawkins, who was at that point best known as the founder of Palm and Handspring, released a book with Sandra Blakeslee called On Intelligence; the first chapter is about Artificial Intelligence, which Hawkins declared to be a flawed construct:

Computers and brains are built on completely different principles. One is programmed, one is self-learning. One has to be perfect to work at all, one is naturally flexible and tolerant of failures. One has a central processor, one has no centralized control. The list of differences goes on and on. The biggest reason I thought computers would not be intelligent is that I understood how computers worked, down to the level of the transistor physics, and this knowledge gave me a strong intuitive sense that brains and computers were fundamentally different. I couldn’t prove it, but I knew it as much as one can intuitively know anything.

Over the rest of the book Hawkins laid out a theory of intelligence that he has continued to develop over the last two decades; in 2021 he published A Thousand Brains: A New Theory of Intelligence, which distilled the theory to its essence:

The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.

Hawkins’ theory is not, to the best of my knowledge, accepted fact, in large part because it’s not even clear how it would be proven experimentally. It is notable, though, that the go-to dismissal of ChatGPT’s intelligence is, at least in broad strokes, exactly what Hawkins says intelligence actually is: the ability to make predictions.

Moreover, as Hawkins notes, this means sometimes getting things wrong. Hawkins writes in A Thousand Brains:

The model can be wrong. For example, people who lose a limb often perceive that the missing limb is still there. The brain’s model includes the missing limb and where it is located. So even though the limb no longer exists, the sufferer perceives it and feels that it is still attached. The phantom limb can “move” into different positions. Amputees may say that their missing arm is at their side, or that their missing leg is bent or straight. They can feel sensations, such as an itch or pain, located at particular locations on the limb. These sensations are “out there” where the limb is perceived to be, but, physically, nothing is there. The brain’s model includes the limb, so, right or wrong, that is what is perceived…

A false belief is when the brain’s model believes that something exists that does not exist in the physical world. Think about phantom limbs again. A phantom limb occurs because there are columns in the neocortex that model the limb. These columns have neurons that represent the location of the limb relative to the body. Immediately after the limb is removed, these columns are still there, and they still have a model of the limb. Therefore, the sufferer believes the limb is still in some pose, even though it does not exist in the physical world. The phantom limb is an example of a false belief. (The perception of the phantom limb typically disappears over a few months as the brain adjusts its model of the body, but for some people it can last years.)

This is an example of “a perception in the absence of an external stimulus that has the qualities of a real perception”; that quote is from the Wikipedia page for hallucination. “Hallucination (artificial intelligence)” has its own Wikipedia entry:

In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot with no knowledge of Tesla’s revenue might internally pick a random number (such as “$13.6 billion”) that the chatbot deems plausible, and then go on to falsely and repeatedly insist that Tesla’s revenue is $13.6 billion, with no sign of internal awareness that the figure was a product of its own imagination.

Such phenomena are termed “hallucinations”, in analogy with the phenomenon of hallucination in human psychology. Note that while a human hallucination is a percept by a human that cannot sensibly be associated with the portion of the external world that the human is currently directly observing with sense organs, an AI hallucination is instead a confident response by an AI that cannot be grounded in any of its training data. AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT. Users complained that such bots often seemed to “sociopathically” and pointlessly embed plausible-sounding random falsehoods within their generated content. Another example of hallucination in artificial intelligence is when the AI or chatbot forgets that it is one and claims to be human.

Like Sydney, for example.

The Sydney Surprise

It has been six weeks now, and I still maintain that my experience with Sydney was the most remarkable computing experience of my life; what made my interaction with Sydney so remarkable was that it didn’t feel like I was interacting with a computer at all:

I am totally aware that this sounds insane. But for the first time I feel a bit of empathy for Lemoine. No, I don’t think that Sydney is sentient, but for reasons that are hard to explain, I feel like I have crossed the Rubicon. My interaction today with Sydney was completely unlike any other interaction I have had with a computer, and this is with a primitive version of what might be possible going forward.

Here is another way to think about hallucination: if the goal is to produce a correct answer like a better search engine, then hallucination is bad. Think about what hallucination implies though: it is creation. The AI is literally making things up. And, in this example with LaMDA, it is making something up to make the human it is interacting with feel something. To have a computer attempt to communicate not facts but emotions is something I would have never believed had I not experienced something similar.

Computers are, at their core, incredibly dumb; transistors, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, the state of which is represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.

The reason why these 1s and 0s can manifest themselves in your reading this Article has its roots in philosophy, as explained in this wonderful 2016 article by Chris Dixon entitled How Aristotle Created the Computer:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon’s article is about the history of mathematical logic; Dixon notes:

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

It is mathematical logic that reduces all of math to a series of logical statements, which allows them to be computed using transistors; again from Dixon:

[George] Boole’s goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:

All men are mortal.

Boole replaced the words “men” and “mortal” with variables, and the logical words “all” and “are” with arithmetical operators:

x = x * y

Which could be interpreted as “Everything in the set x is also in the set y”…

[Claude] Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.” He showed the correspondence between electrical circuits and Boolean operations in a simple chart:

Claude Shannon's circuit interpretation table

This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.

Claude Shannon's circuit design

By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetical logic units, a key component in modern computers.
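
To make Shannon’s mapping concrete, here is a small illustrative sketch of my own (not from Dixon’s article): a half adder, the building block Shannon describes, expressed directly as Boolean operations in C#.

    // A half adder built from Boolean logic: sum = a XOR b, carry = a AND b.
    static (bool Sum, bool Carry) HalfAdd(bool a, bool b) => (a ^ b, a & b);

    // HalfAdd(true, true) yields (Sum: false, Carry: true), i.e. binary 1 + 1 = 10.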

The implication of this approach is that computers are deterministic: if circuit X is open, then the proposition represented by X is true; 1 plus 1 is always 2; clicking “back” on your browser will exit this page. There are, of course, a huge number of abstractions and massive amounts of logic between an individual transistor and any action we might take with a computer — and an effectively infinite number of places for bugs — but the appropriate mental model for computers is that they do exactly what they are told (indeed, a bug is not the computer making a mistake, but rather a manifestation of the programmer telling the computer to do the wrong thing). Sydney, though, was not at all what Microsoft intended.

ChatGPT’s Computer

I’ve already mentioned Bing Chat and ChatGPT; on March 14 Anthropic released another AI assistant named Claude: while the announcement doesn’t say so explicitly, I assume the name is in honor of the aforementioned Claude Shannon.

This is certainly a noble sentiment — Shannon’s contributions to information theory broadly extend far beyond what Dixon laid out above — but it also feels misplaced: while technically speaking everything an AI assistant does is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not prescribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations; Sydney was interesting, but what about homework?

Here are three questions that GPT-4 got wrong:

Questions GPT4 got wrong

All three of these examples come from Stephen Wolfram, who noted that there are some kinds of questions that large language models just aren’t well-suited to answer:

Machine learning is a powerful method, and particularly over the past decade, it’s had some remarkable successes—of which ChatGPT is the latest. Image recognition. Speech to text. Language translation. In each of these cases, and many more, a threshold was passed—usually quite suddenly. And some task went from “basically impossible” to “basically doable”.

But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine…

And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either.

And that’s exactly what we’re seeing in the examples above. ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”.

That’s exactly what OpenAI has done. From The Verge:

OpenAI is adding support for plug-ins to ChatGPT — an upgrade that massively expands the chatbot’s capabilities and gives it access for the first time to live data from the web.

Up until now, ChatGPT has been limited by the fact it can only pull information from its training data, which ends in 2021. OpenAI says plug-ins will not only allow the bot to browse the web but also interact with specific websites, potentially turning the system into a wide-ranging interface for all sorts of services and sites. In an announcement post, the company says it’s almost like letting other services be ChatGPT’s “eyes and ears.”

Stephen Wolfram’s Wolfram|Alpha is one of the official plugins, and now ChatGPT gets the above answers right — and quickly:2

ChatGPT gets the right answer from Wolfram|Alpha

Wolfram wrote in the post that requested this integration:

For decades there’s been a dichotomy in thinking about AI between “statistical approaches” of the kind ChatGPT uses, and “symbolic approaches” that are in effect the starting point for Wolfram|Alpha. But now—thanks to the success of ChatGPT—as well as all the work we’ve done in making Wolfram|Alpha understand natural language—there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.

The fact this works so well is itself a testament to what AI assistants are, and are not: they are not computing as we have previously understood it; they are shockingly human in their way of “thinking” and communicating. And frankly, I would have had a hard time solving those three questions as well — that’s what computers are for! And now ChatGPT has a computer of its own.

Opportunity and Risk

One implication of this plug-in architecture is that someone needs to update Wikipedia: the hallucination example above is now moot, because ChatGPT isn’t making up revenue numbers — it’s using its computer:

Tesla's revenue in ChatGPT

This isn’t perfect — for some reason Wolfram|Alpha’s data is behind, but it did get the stock price correct:

Tesla's stock price in ChatGPT

Wolfram|Alpha isn’t the only plugin, of course: right now there are 11 plugins in categories like Travel (Expedia and Kayak), restaurant reservations (OpenTable), and Zapier, which opens the door to 5,000+ other apps (the plugin to search the web isn’t currently available); they are all presented in what is being called the “Plugin store.” The Instacart integration was particularly delightful:

ChatGPT adds a shopping list to Instacart

Here’s where the link takes you:

My ChatGPT-created shopping cart

ChatGPT isn’t actually delivering me groceries — but it’s not far off! One limitation is I actually had to select the Instacart plugin; you can only have 3 loaded at a time. Still, that is a limitation that will be overcome, and it seems certain that there will be many more plugins to come; one could certainly imagine OpenAI both allowing customers to choose and also selling default plugin status for certain categories on an auction basis, using the knowledge it gains about users.

This is also rather scary, and here I hope that Hawkins is right in his theory. He writes in A Thousand Brains in the context of AI risk:

Intelligence is the ability of a system to learn a model of the world. However, the resulting model by itself is valueless, emotionless, and has no goals. Goals and values are provided by whatever system is using the model. It’s similar to how the explorers of the sixteenth through the twentieth centuries worked to create an accurate map of Earth. A ruthless military general might use the map to plan the best way to surround and murder an opposing army. A trader could use the exact same map to peacefully exchange goods. The map itself does not dictate these uses, nor does it impart any value to how it is used. It is just a map, neither murderous nor peaceful. Of course, maps vary in detail and in what they cover. Therefore, some maps might be better for war and others better for trade. But the desire to wage war or trade comes from the person using the map.

Similarly, the neocortex learns a model of the world, which by itself has no goals or values. The emotions that direct our behaviors are determined by the old brain. If one human’s old brain is aggressive, then it will use the model in the neocortex to better execute aggressive behavior. If another person’s old brain is benevolent, then it will use the model in the neocortex to better achieve its benevolent goals. As with maps, one person’s model of the world might be better suited for a particular set of aims, but the neocortex does not create the goals.

The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it’s the neocortex that is capable of learning and thinking and predicting. Hawkins’ argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it.

To which I say, we shall see! I agree with Tyler Cowen’s argument about Existential Risk, AI, and the Inevitable Turn in Human History: AI is coming, and we simply don’t know what the outcomes will be, so our duty is to push for the positive outcome in which AI makes life markedly better. We are all, whether we like it or not, enrolled in something like the grand experiment Hawkins has long sought — the sailboats are on truly uncharted seas — and whether or not he is right is something we won’t know until we get to whatever destination awaits.


  1. GPT-4 was trained on Internet data up to 2021, so it did not include this Article 

  2. The Mercury question is particularly interesting; you can see the “conversation” between ChatGPT and Wolfram|Alpha here, here, here, and here as it negotiates exactly what it is asking for. 


Alonso case shows how easily weak F1 rules can be undermined


The flip-flopping over Fernando Alonso's Saudi Arabian Grand Prix penalty was an example of how convoluted F1 rules can be. That saga also proved how easily a system that relies on 'spirit' and 'intent' falls apart if a team decides to challenge it [...]


The post Alonso case shows how easily weak F1 rules can be undermined appeared first on The Race.




Using Command Binding in Windows Forms apps to go Cross-Platform


Large line-of-business WinForms applications can often benefit from the use of the Model-View-ViewModel (MVVM) pattern to simplify maintenance, reuse, and unit testing. In this post, I’ll explain the important concepts and architectural patterns of MVVM including data binding and command binding. Then we will take a look at how you can leverage modern libraries, .NET 7 features, and Visual Studio tooling to efficiently modernize your WinForms applications. By the end, you will see how this approach sets your application up to go cross-platform with popular frameworks like .NET MAUI which can leverage the MVVM code across iOS, Android, Mac, and Windows.

Say Goodbye to Code-behind: WinForms meets MVVM & .NET MAUI

One of the features that has led WinForms to its overwhelming popularity as a RAD (rapid application development) tool is often, for large line-of-business apps, also its biggest architectural challenge: placing application logic as code behind UI-element event handlers. As an example: the developer wants to display a database view after a mouse click on a pulldown menu entry. The pulldown menu is located in the main form of the application. This form is a class derived from System.Windows.Forms.Form, so it has dependencies on the .NET WinForms runtime. This means every line of code put here can only be reused in the context of a WinForms app, and probably only in the context of exactly this WinForms app. Typically for WinForms, user interaction with any UI element triggers events. These events are then handled in corresponding event handler routines, an approach often called code-behind. So, the code behind the menu entry is triggered. In many cases, the form is now executing the logic for a domain-specific task that fundamentally has nothing to do with the UI. For large line-of-business apps, this can quickly become an architectural nightmare.

In the case of this example, the code which is executed as a result of a user clicking a menu item contains many steps. The form connects to a database, sends a query to that database, then fetches the data, perhaps prepares it by performing additional calculations or pulling in additional data from other sources, and finally passes the resulting data set to another WinForms form, which presents it in a DataGridView. Code-behind architectures like this tend to mix domain-specific tasks with UI-technical ones. The resulting codebase is not only difficult to maintain, it is also hard to reuse in other scenarios, let alone to cover with unit tests for the business logic.

Enter UI-Controller architecture

With the introduction of Windows Presentation Foundation (WPF) in 2006, a new design pattern emerged that set out to clean up the code-behind spaghetti architecture of previous UI stacks by introducing a clear separation of code based on the layer in the application: UI-relevant code remained alone in a view layer, domain-specific code migrated to a ViewModel layer, and the data transport classes, which carried data from a database or a similar data source and were processed in the ViewModel, were simply called Model classes.

The name of the pattern became Model-View-ViewModel, or MVVM for short. The processing of data and the execution of the domain-specific code are reserved for the ViewModel. As an example, the business logic for an accounting application only runs in the context of one (or more) ViewModel assemblies. Projects that build these assemblies have no references to concrete UI stacks like WinForms, WPF or .NET MAUI. Rather, they provide an invisible or virtual UI only through properties, which are then linked to the UI at runtime via data binding.

This technique is not entirely new for WinForms apps. Abstraction via data binding was already common practice in the VB6 days and became even more flexible with WinForms and the introduction of Typed DataSets. The abstraction of data and UI worked according to the same principle then as it does today: the various properties of a data source (in MVVM the ViewModel; in typical WinForms or VB6 architectures often a filtered and specially prepared view of the database directly) are bound to specific control elements of the UI. The Firstname property of a data source class is bound to the Text property of the FirstnameTextBox control, the JoinDate property of the same data source class is bound to the Value property of the JoinDateTimePicker control, and so on.

Even the notifications from the data classes to the UI that data has changed worked in the first WinForms versions the same way as they still do today in WPF, WinUI or .NET MAUI: data classes implement an interface called INotifyPropertyChanged, which mandates an event called PropertyChanged on the data source classes. It is then the task of the data classes to determine, in the respective setters of the property implementations, whether the property value has really changed and, if so, to raise the event, specifying the property name. A typical implementation of a property in such a class looks something like this:

    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    public class NotifyPropertyChangeDemo : INotifyPropertyChanged
    {
        // The event that is fired when a property changes.
        public event PropertyChangedEventHandler? PropertyChanged;

        // Backing fields for the properties.
        private string? _lastName;
        private string? _firstName;

        // The properties that are being monitored.
        public string? LastName
        {
            get => _lastName;

            set
            {
                if (_lastName == value)
                {
                    return;
                }

                _lastName = value;

                // Notify the UI that the property has changed.
                // Using CallerMemberName at the call site will automatically fill in
                // the name of the property that changed.
                OnPropertyChanged();
            }
        }

        public string? FirstName
        {
            get => _firstName;

            set
            {
                if (_firstName == value)
                {
                    return;
                }

                _firstName = value;
                OnPropertyChanged();
            }
        }

        private void OnPropertyChanged([CallerMemberName] string propertyName = "") 
            => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
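
For illustration, here is a minimal usage sketch (mine, not from the original post) showing how a subscriber, such as the binding infrastructure, observes the event:

    // Hypothetical consumer; WinForms data binding subscribes in a similar way.
    var demo = new NotifyPropertyChangeDemo();
    demo.PropertyChanged += (s, e) => Console.WriteLine($"{e.PropertyName} changed.");

    demo.LastName = "Shannon";  // raises PropertyChanged("LastName")
    demo.LastName = "Shannon";  // no event: the setter detects the unchanged value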

In summary, the general idea is to remote-control the actual UI through data binding. However, in contrast to the original idea from VB6 and classic WinForms data binding, a UI-Controller pattern like MVVM is more than a convenience for populating a complex form with data: while VB6 and classic WinForms applications typically just bound populated data classes like Typed DataSets or Entity Framework model class instances, the ViewModel holds the data and encapsulates the business logic to prepare that data in such a way that practically the only task of the UI layer is to present it in a user-friendly way.

UI-Controller Architecture in WinForms: To what end?

If you were to architect and develop a new application from scratch, you would most likely consider using a modern UI stack, and in many cases that makes sense: for a new Windows app, a more modern stack like WinUI might be a meaningful alternative, and a web-based front end using technologies like Blazor or Angular/TypeScript also has merit. But it’s often not that easy in real life. Here is a list of reasons why adopting a UI-Controller architecture for new or existing WinForms apps makes sense:

  • Acceptance of UI paradigm changes in LOB apps: Users are usually trained not only in one particular business application, but also build muscle memory for how the UI is handled across an LOB app suite in general. If LOB apps get rewritten for a new UI stack, additional training for the users needs to be taken into account. Modernizing the core of a WinForms business app without changing the workflow for a big existing user base might be a requirement in certain scenarios.

  • Feasibility of re-writes of huge line-of-business apps: There are hundreds of thousands of line-of-business WinForms apps already out there. Discarding a huge business app, often several hundred thousand lines of code, is simply not feasible. It makes more sense to re-architect and modernize such an app step by step. The vast majority of development teams don’t have the time to keep supporting the actively in-use legacy app while developing the same app, modernized and based on a completely new technology, from scratch. Modernizing the app in place, step by step and area by area, is often the only viable alternative.

  • Unit tests for ViewModels: Introducing a UI-Controller architecture into an existing LOB app opens up another advantage: since UI-Controllers/ViewModels are not supposed to have dependencies on particular UI technologies, unit tests for such apps can be implemented much more easily. We cover this topic in more detail in the context of the sample app below.

  • Mobile app spin-offs: The necessity to spin off mobile apps from large WinForms line-of-business apps is a growing requirement for many businesses. With an architecture in which ViewModels can be shared between, for example, WinForms and .NET MAUI, the development of such mobile apps can be streamlined with a lot of synergy. This topic is also covered in more detail in the context of the sample app.

  • Consolidating and modernizing backend technologies: Last but not least, with a proper architecture, consolidating on-site server backends and migrating the backends of LOB apps to Azure-based cloud technologies is straightforward and can be done as a migration project over time. The users do not lose their usual way of working, and the modernization can happen mostly in the background without interruptions.

The sample apps of this blog post

You can find the sample app used in this blog post in the WinForms-Designer-Extensibility GitHub repo. It contains these areas:

  • The simplified scenarios used in this blog post to explain the UI-Controller/MVVM approach in WinForms with the new WinForms Command Binding features. You can find these samples in the project MauiEdit.ViewModel in the solution folder Examples.
  • A bigger example (the WinForms Editor), showing the new WinForms features in the context of a real-world scenario.
  • A .NET MAUI-Project, adapting the exact same ViewModel used for the WinForms App for a .NET MAUI app. We are limiting the showcase here to Android for simplicity in this context.
  • A unit test project, which shows how using ViewModels enables you to write unit tests for a UI based on a UI-Controller abstraction like MVVM.

    Screenshot of the sample app in the solution explorer showing the 4 projects .NET MAUI, WinForms, UnitTest and ViewModel.

NOTE: This blog post is not an introduction to .NET MAUI. There are a lot of resources on the web, showing the basic approach of MVVM with XAML-based UI stacks, an Introduction to .NET MAUI, many examples, and lots of really good YouTube videos.

Animated gif which shows the WinForms and the Android version of the sample editor in action.

Command Binding for calling methods inside the UI-controller

While WinForms brought along a pretty complete infrastructure for data binding from the beginning, and data could flow between the UI and the binding source in both directions, one aspect was missing: up to now, there wasn’t a “bindable” way to connect business logic methods in the data source (or more accurately, the UI-Controller or ViewModel) with the UI. The goal of the separation of concerns is that a form or UserControl shouldn’t execute code which manipulates business data; it should leave that task completely to the UI-Controller. In the spirit of MVVM, the approach is to connect an element of the UI to a method inside the UI-Controller, so that when the UI element is engaged, the respective method in the controller (the ViewModel) is executed. These methods, which are responsible for the domain-specific functionality inside a ViewModel, are called Commands. For linking these ViewModel commands with the user interface, MVVM utilizes Command Binding, which is based on types implementing the ICommand interface. Those types can then be used in UI-Controllers like ViewModels to define command properties: the properties don’t hold the actual code to be executed; rather, they point to the methods which get executed automatically when the UI element the command is bound to gets engaged in the View.

Note: Although ICommand is located in the namespace System.Windows.Input, which in principle points to the WPF context, ICommand doesn’t have any dependencies on any existing UI stack in .NET or the .NET Framework. Rather, ICommand is used across all .NET-based UI stacks, including WPF, .NET MAUI, WinUI, UWP, and now also WinForms.

If you have a Command defined in your ViewModel, you also need Command properties in UI elements to bind against. To make this possible for the .NET 7 WinForms runtime, we made several changes to WinForms controls and components:

  • Introduced Command for ButtonBase: ButtonBase got a Command property of type ICommand. That means the controls Button, CheckBox and RadioButton in WinForms now have a Command property, as does every WinForms control derived from ButtonBase (see the sketch after this list).
  • Introduced BindableComponent: The WinForms .NET 7 runtime introduced BindableComponent. Before, components were not intrinsically bindable in WinForms, they were missing the necessary binding infrastructure. BindableComponent derives from Component and provides the infrastructure to create properties for a component, which can be bound.
  • Made ToolStripItem bindable: ToolStripItem no longer derives from Component but rather now from BindableComponent. This was necessary, since ToolStripButton and ToolStripMenuItem are desired candidates for command binding targets, but for that they need to be able to be bound to begin with.
  • Introduced Command for ToolStripItem: ToolStripItem also got a Command property of type ICommand. Although every component derived from ToolStripItem now has a Command property, especially ToolStripButton, ToolStripDropDownButton, ToolStripMenuItem and ToolStripSplitButton are ideal candidates for command binding due to their UI characteristics.
  • Introduced CommandParameter properties: For a simplified passing of Command parameters, both ButtonBase and ToolStripItem got a CommandParameter property. When a command gets executed, the content of CommandParameter is automatically passed to the command’s method in the UI-Controller/ViewModel.
  • Introduced public events for new properties: Both ButtonBase and ToolStripItem got the new events CommandCanExecuteChanged, CommandChanged and CommandParameterChanged. With these events the new properties are supported by the binding infrastructure of WinForms, making those properties bindable.
  • Introduced (protected) OnRequestCommandExecute: Overriding OnRequestCommandExecute allows ButtonBase- or ToolStripItem-derived classes to intercept the request to execute a command when the user has clicked a bound command control or component. The derived control can then decide not to invoke the bound UI-Controller’s or ViewModel’s command by simply not calling the base class’s method.
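
To make the list above concrete, here is a minimal wiring sketch in code (the control and ViewModel names are hypothetical; the Command and CommandParameter properties are the .NET 7 preview APIs just described):

    // Assumes a ViewModel exposing an ICommand property named SaveCommand.
    saveButton.Command = viewModel.SaveCommand;            // ButtonBase.Command
    saveButton.CommandParameter = "current-document";      // passed to Execute/CanExecute
    saveToolStripMenuItem.Command = viewModel.SaveCommand; // ToolStripItem.Command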

Controlling Command Execution and Command Availability by Utilizing Relay Commands

Since ICommand is just an interface, which mandates methods and an event, a UI-Controller or ViewModel needs to use a concrete class instance which implements the ICommand interface.

The ICommand interface consists of two methods and an event:

  • Execute: This method is called when the command is invoked. It contains the domain-specific logic.
  • CanExecute: This method is called whenever the command’s execution context changes. It returns a boolean value indicating the command’s availability.
  • CanExecuteChanged: This event is raised whenever the availability of the command, as reported by CanExecute, may have changed.
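
For reference, this is the shape of the interface as declared in System.Windows.Input:

    public interface ICommand
    {
        event EventHandler? CanExecuteChanged;
        bool CanExecute(object? parameter);
        void Execute(object? parameter);
    }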

One typical implementation of a class implementing ICommand is called RelayCommand, and a simplified version would look like this:

    using System.Windows.Input;

    // Implementation of ICommand - simplified version not taking parameters into account.
    public class RelayCommand : ICommand
    {
        public event EventHandler? CanExecuteChanged;

        private readonly Action _commandAction;
        private readonly Func<bool>? _canExecuteCommandAction;

        public RelayCommand(Action commandAction, Func<bool>? canExecuteCommandAction = null)
        {
            ArgumentNullException.ThrowIfNull(commandAction, nameof(commandAction));

            _commandAction = commandAction;
            _canExecuteCommandAction = canExecuteCommandAction;
        }

        // Explicit interface implementations, since there will never be the necessity to call these
        // methods directly. They will always exclusively be called by the bound command control.
        bool ICommand.CanExecute(object? parameter)
            => _canExecuteCommandAction is null || _canExecuteCommandAction.Invoke();

        void ICommand.Execute(object? parameter)
            => _commandAction.Invoke();

        /// <summary>
        ///  Triggers sending a notification, that the command availability has changed.
        /// </summary>
        public void NotifyCanExecuteChanged()
            => CanExecuteChanged?.Invoke(this, EventArgs.Empty);
    }

When you bind the Command property of a control against a property of type RelayCommand (or any other implementation of ICommand), two things happen:

  • Whenever the user engages the control (by clicking or pressing the related shortcut key on the keyboard), the Action defined for the RelayCommand gets executed.
  • The control whose Command property is bound gets enabled or disabled based on the execution context, which is controlled by the CanExecute method of the RelayCommand: when it returns false, the command cannot be executed, and the control gets automatically disabled; when it returns true, the control gets enabled.

Note: The execution context of a command is usually controlled by the UI-Controller/ViewModel. It might well be affected by the UI, but it’s ultimately the ViewModel’s decision to enable or disable a command based on some domain-specific context. The sample code below gives a practical example of how to enable or disable a command’s execution context based on a user’s input.

Additional WinForms Binding Features in .NET 7

In addition to the new command binding features, we added a new general data binding infrastructure feature on Control: for easier cascading of data source population in nested forms (for example, a form holds UserControls which hold other UserControls which hold custom controls, etc.), .NET 7 also introduced a DataContext property of type Object on Control. This property has ambient characteristics (like BackColor or Font). That means when you assign a data source instance to DataContext, that instance gets automatically delegated down to every child control of the form. Currently, the DataContext property simply serves as a data source carrier and does not affect any bindings directly. It is still the duty of the control to delegate a new data source down to the respective BindingSource components, where it makes sense. Examples:

  • UserControls, which are used inside of Forms, utilize dedicated BindingSource components for internal binding scenarios. They all need to be fed from one central data source. This data source can now be handed down from the parent control via its DataContext property. When the DataContext property gets assigned on the form, not only will every child control in the controls collection of this parent have that same DataContext; as soon as the data source of the parent’s DataContext changes, all the children receive the DataContextChanged event, so they know they can provide their BindingSource‘s DataSource properties with whatever updated data source their domain-specific context requires (a sketch follows after this list). We also cover this scenario in the sample app further down.
  • If you have implemented data binding scenarios with custom binding engines, you can also use DataContext to simplify binding scenarios and make them more robust. In derived controls, you can override OnDataContextChanged to control the change notification of the DataContext, and you can use OnParentDataContextChanged to intercept notifications when the parent control notifies its child controls that the DataContext property has changed.
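
Here is a minimal sketch of the first scenario above (the UserControl and BindingSource names are hypothetical; OnDataContextChanged is the .NET 7 override just mentioned):

    // A UserControl that forwards the ambient DataContext to its own BindingSource.
    public partial class CustomerCard : UserControl
    {
        protected override void OnDataContextChanged(EventArgs e)
        {
            base.OnDataContextChanged(e);

            // Hand the (possibly new) ambient data source to the internal binding.
            _customerBindingSource.DataSource = DataContext;
        }
    }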

Binding Commands and ViewModels with the WinForms out-of-proc Designer

Now, let’s put all of the puzzle pieces we’ve covered so far together, and apply them in a very simple WinForms App:

A WinForms form in the Designer with a CheckBox, a Button, a Label, serving as a sample MVVM view.

The idea here is to have a form which is controlled by a ViewModel that provides a command and properties to bind against the CheckBox control and the Label. When the Button is engaged, the bound command in the ViewModel gets executed, changing the SampleCommandResult property, which states how often the Button has been clicked. Since SampleCommandResult is bound against the Label, the UI reflects this result. The CheckBox in this scenario controls the availability of the command: when the CheckBox is checked, the command is available, and the Button is therefore enabled, since the command’s CanExecute method returns true. As soon as the CheckBox’s check state changes, the bound SampleCommandAvailability property gets updated, and its setter gets executed. That in turn invokes the NotifyCanExecuteChanged method, which triggers the execution of the command’s CanExecute method. CanExecute now returns the updated execution context, and the enabled state of the bound Button gets updated automatically, so the Button becomes disabled.

The business logic for this application is completely implemented in the ViewModel. Except for what is in InitializeComponent to set up the form’s controls, the form only holds the code to set up the ViewModel as the form’s data source.

    public class SimpleCommandViewModel : INotifyPropertyChanged
    {
        // The event that is fired when a property changes.
        public event PropertyChangedEventHandler? PropertyChanged;

        // Backing fields for the properties.
        private bool _sampleCommandAvailability;
        private RelayCommand _sampleCommand;
        private string _sampleCommandResult = "* not invoked yet *";
        private int _invokeCount;

        public SimpleCommandViewModel()
        {
            _sampleCommand = new RelayCommand(ExecuteSampleCommand, CanExecuteSampleCommand);
        }

        /// <summary>
        ///  Controls the command availability and is bound to the CheckBox of the Form.
        /// </summary>
        public bool SampleCommandAvailability
        {
            get => _sampleCommandAvailability;
            set
            {
                if (_sampleCommandAvailability == value)
                {
                    return;
                }

                _sampleCommandAvailability = value;

                // When this property changes we need to notify the UI that the command availability has changed.
                // The command's availability is reflected through the button's enabled state, which is - 
                // because its command property is bound - automatically updated.
                _sampleCommand.NotifyCanExecuteChanged();

                // Notify the UI that the property has changed.
                OnPropertyChanged();
            }
        }

        /// <summary>
        ///  Command that is bound to the button of the Form.
        ///  When the button is clicked, the command is executed.
        /// </summary>
        public RelayCommand SampleCommand
        {
            get => _sampleCommand;
            set
            {
                if (_sampleCommand == value)
                {
                    return;
                }

                _sampleCommand = value;
                OnPropertyChanged();
            }
        }

        /// <summary>
        ///  The result of the command execution, which is bound to the Form's label.
        /// </summary>
        public string SampleCommandResult
        {
            get => _sampleCommandResult;
            set
            {
                if (_sampleCommandResult == value)
                {
                    return;
                }

                _sampleCommandResult = value;
                OnPropertyChanged();
            }
        }

        /// <summary>
        ///  Method that is executed when the command is invoked.
        /// </summary>
        public void ExecuteSampleCommand()
            // Pre-increment so the first invocation reports 1, not 0.
            => SampleCommandResult = $"Command invoked {++_invokeCount} times.";

        /// <summary>
        ///  Method that determines whether the command can be executed. It reflects the 
        ///  property SampleCommandAvailability, which in turn is bound to the CheckBox of the Form.
        /// </summary>
        public bool CanExecuteSampleCommand()
            => SampleCommandAvailability;

        // Notify the UI that one of the properties has changed.
        private void OnPropertyChanged([CallerMemberName] string propertyName = "")
            => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }

Setting up the form and the ViewModel through binding in WinForms is of course different from how it is done in XAML-based UI stacks like WPF, UWP or .NET MAUI. But it’s pretty quick and 100% in the spirit of WinForms.

Note: Please keep in mind that the out-of-process Designer doesn’t support the Data Source Provider Service, which is responsible for providing the classic Framework Data Sources window. The data binding blog post gives more background on this. So, in a .NET WinForms app you currently cannot interactively create Typed DataSets with the Designer or add existing Typed DataSets as data sources through the Data Sources tool window. Also keep in mind that although there is no Designer support for adding Typed DataSets as data sources, the out-of-process Designer for .NET does support binding against Typed DataSets of already defined data sources (which have been brought over from a .NET Framework application just by copying the file structure of a project).

The out-of-process Designer provides an easier alternative to hook up new Object Data Sources, which can be based on ViewModels as described here, but also on classes generated by Entity Framework or EF Core. The Microsoft docs provide a neat guide on how to work with EF Core and this new feature to create a simple data-bound WinForms application.

To set up the bindings for the ViewModel to the sample WinForms UI, follow these steps:

  1. Open and design the form in the WinForms out-of-process Designer according to the screenshot above.

  2. Select the Button control by clicking on it.

  3. In the Property Browser, scroll to the top, find the (DataBindings) property and open it.

  4. Find the row for the Command property, and open the Design Binding Picker by clicking the arrow-down button at the end of the cell.

    Screenshot showing the open Design Binding Picker to add a new object data source.

  5. In the Design Binding Picker, click on Add new Object Data Source.

  6. The Designer now shows the Add Object Data Source dialog. In this dialog, click the class you want to add, and then click OK.

    Screenshot showing the new Add Object Data Source dialog.

  7. After closing the dialog, the new data source becomes available, and you can find the SampleCommand property in the Design Binding Picker and assign it to the Button’s Command property.

    Screenshot showing how to pick a command from a ViewModel in the Design Binding Picker.

  8. Select the CheckBox control in the form, and bind its Checked property to the ViewModel’s SampleCommandAvailability property. Please note from the following screenshot: the previous step automatically inserted the required BindingSource component, which in most cases is used as a mediator for binding and syncing between several control instances (like the current row of a bound DataGridView and a detail view on the same form). When you earlier picked the SampleCommand property directly from the ViewModel, the Designer created the BindingSource based on that pick and then selected that property from the BindingSource. Successive bindings on this form should now be picked directly from that BindingSource instance.

    Screenshot showing the binding of the CheckBox against CommandAvailability of the ViewModel.

  9. Select the Label control in the form, and bind its Text property to the ViewModel’s SampleCommandResult property.

The only code that now needs to run directly in the form is the hookup of the ViewModel to the form’s BindingSource at runtime:

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            simpleCommandViewModelBindingSource.DataSource = new SimpleCommandViewModel();
        }

Using the MVVM community toolkit for streamlined coding

For more complex scenarios, developing the domain-specific MVVM classes by hand would require writing a comparatively expensive amount of boilerplate code. Typically, developers using XAML-based UI stacks like WPF or .NET MAUI use a library called the MVVM Community Toolkit, both to utilize the many MVVM base classes that library provides and to let code generators do the writing of this kind of code wherever it makes sense.

To use the MVVM Community Toolkit, you just need to add a reference to the NuGet package CommunityToolkit.Mvvm to the project which hosts the ViewModel classes for your app.

Screenshot showing how to add the Microsoft CommunityToolkit.MVVM package to a project

The introduction to the MVVM community toolkit explains the steps in more detail.

Our more complex MVVM Editor sample uses the Toolkit in several areas:

  • ObservableObject: ViewModels in the Editor sample app use ObservableObject as their base class (indirectly, through WinFormsViewController; take a look at the source code of the sample for additional background info). It implements INotifyPropertyChanged and encapsulates all the necessary property notification infrastructure.
  • RelayCommand: If a dedicated implementation of commands is necessary, use the Toolkit’s RelayCommand, which is an extended version of the class you already learned about earlier.

What really helps save time, though, are the code generators the MVVM Community Toolkit provides. Instead of writing the implementation for a bindable ViewModel property yourself, you just apply the ObservableProperty attribute to the backing field, and the code generators do the rest:

    /// <summary>
    ///  UI-Controller class for controlling the main editor Window and most of the editor's functionality.
    /// </summary>
    public partial class MainFormController : WinFormsViewController
    {
        /// <summary>
        ///  Current length of the selection.
        /// </summary>
        [ObservableProperty]
        private int _selectionLength;
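
For illustration, the code generator then emits a public SelectionLength property that is roughly equivalent to this hand-written version (a sketch of the generated shape, not the exact emitted code):

    // Roughly what [ObservableProperty] generates for the _selectionLength field:
    public int SelectionLength
    {
        get => _selectionLength;
        set => SetProperty(ref _selectionLength, value); // raises PropertyChanged
    }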

You can save even more time by using the RelayCommand attribute, which you put on top of the command methods of your ViewModel. In this case, the Command property which you need to bind against the View is completely generated by the Toolkit, along with the code which hooks up the command. If the command additionally needs to take a related CanExecute method into account, you can provide that method as shown in the code snippet below. It doesn’t really matter whether the method is async or not. In this example, converting the whole text into upper case is done inside a dedicated task, to show that asynchronous methods are just as valid as synchronous methods in WinForms as a basis for command code generation.

        [RelayCommand(CanExecute = nameof(CanExecuteContentDependingCommands))]
        private async Task ToUpperAsync()
        {
            string tempText = null!;
            var savedSelectionIndex = SelectionIndex;

            await Task.Run(() =>
            {
                var upperText = TextDocument[SelectionIndex..(SelectionIndex + SelectionLength)].ToUpper();

                tempText = string.Concat(
                    TextDocument[..SelectionIndex].AsSpan(),
                    upperText.AsSpan(),
                    TextDocument[(SelectionIndex + SelectionLength)..].AsSpan());
            });

            TextDocument = tempText;
            SelectionIndex = savedSelectionIndex;
        }
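
As an aside, following the Toolkit’s naming convention, the generator derives the command name from the method name: ToUpperAsync above yields an asynchronous command property named ToUpperCommand, which is what the view binds against (the control name below is hypothetical):

    // Wiring against the generated command property:
    toUpperButton.Command = mainFormController.ToUpperCommand;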

Those are just a few examples of how the Toolkit supports you and of how the Editor sample app uses the Toolkit in several spots. Refer to the Toolkit’s documentation for additional information.

Honoring Preview Features in the Runtime and the out-of-proc WinForms Designer

The WinForms Command Binding feature is currently in preview. That means we might introduce additional functionality for it in the .NET 8 time frame, or slightly change implementation details, based on feedback. So feedback is really important to us, especially with this feature!

When you use features which have been marked as being in preview (and please note that even in a release version of .NET there can be features still in preview), you need to acknowledge in your app that you are aware of using a preview feature. You do that by adding a special tag named EnablePreviewFeatures to your csproj file (or vbproj for Visual Basic .NET 7 projects), like this. The Preview Features design document describes this mechanism in detail.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <EnablePreviewFeatures>true</EnablePreviewFeatures>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net7.0-windows</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
    <LangVersion>10.0</LangVersion>
    <Nullable>enable</Nullable>
  </PropertyGroup>

</Project>

This way, the whole assembly is automatically attributed with RequiresPreviewFeatures. If you omit this and use preview features of the WinForms runtime, like the Command property of a button control in .NET 7, you will see the following error message:

Screenshot of the error list tool window showing messages which result from using runtime features  marked in preview.

In addition, if you tried to bind against a runtime property that is in preview, the Design Binding Picker would instead show an error message stating that you need to enable preview features as described.

Screenshot of the Design Binding Picker with message that binding to properties in preview is not allowed.

Reusing WinForms ViewModels in .NET MAUI Apps

Examining the WinForms sample app, you will see that the actual MainForm of the app doesn’t consist of a lot of code:

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        DataContext = new MainFormController(SimpleServiceProvider.GetInstance());
    }

    private void MainForm_DataContextChanged(object sender, EventArgs e)
        => _mainFormControllerBindingSource.DataSource = DataContext;
}

This is really all there is: on loading the app’s main form, we create an instance of the MainFormController and pass in a service provider. The MainFormController serves as the ViewModel and holds every aspect of the business logic – the animated gif above shows what that means: reporting the cursor position, creating a new document, wrapping text, or converting the whole text to upper case characters. The ViewModel provides a command for each of these, and all of them are assigned to the view (the main form) via command binding.

We already saw how the command binding is done with the new Command Binding features in WinForms for .NET 7. In .NET MAUI, command binding to an existing ViewModel can be done via XAML data binding. Without going into too much detail on that topic, here is a small excerpt of the .NET MAUI sample app’s XAML code:

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MauiEdit.MainPage">

    <Shell.TitleView>
        <Grid Margin="0,8,0,0">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*"/>
                <ColumnDefinition Width="Auto"/>
            </Grid.ColumnDefinitions>

            <Label 
                Margin="20,0,0,0"
                Text="WinForms/Maui Edit"
                FontAttributes="Bold"
                FontSize="32"/>

            <HorizontalStackLayout 
                Grid.Column="1">

                <Button 
                    Margin="0,0,10,0"
                    Text="New..." 
                    HeightRequest="40"
                    Command="{Binding NewDocumentCommand}"/>

                <Button 
                    Margin="0,0,10,0"
                    Text="Insert Demo Text" 
                    HeightRequest="40"
                    Command="{Binding InsertDemoTextCommand}"/>

Basically, while the WinForms designer provides a UI for interactively linking up the commands of a data source with the form, in XAML-based UI stacks it is common practice to design the UI by writing the XAML code and to add the bindings directly in that process.

The Microsoft docs hold more information about XAML data binding in the context of MVVM and command binding (which is also called Commanding).

One additional aspect is important to point out in the context of reusing a ViewModel across different UI stacks like WinForms and .NET MAUI: when the user clicks, for example, New Document, the respective command is executed in the ViewModel. As you can observe in the application’s live demo gif, the WinForms version then prompts the user with a Task Dialog, asking whether deleting the current text is OK. Taking a look at the respective ViewModel’s code, we see the following method:

    [RelayCommand(CanExecute = nameof(CanExecuteContentDependingCommands))]
    private async Task NewDocumentAsync()
    {
        // This is how we control the UI via a Controller or ViewModel:
        // we get the required service from the ServiceProvider, which the
        // Controller/ViewModel received via dependency injection.
        // Dependency injection means that _depending_ on which UI stack (or
        // even unit test) we're actually running in, the dialogService will
        // do different things:

        // * On WinForms, it shows a WinForms MessageBox.
        // * On .NET MAUI, it displays an alert.
        // * In the context of a unit test, it doesn't show anything: the unit
        //   test just pretends the user clicked the result button.
        var dialogService = ServiceProvider.GetRequiredService<IDialogService>();

        // Now we use this DialogService to remote-control the UI completely
        // _and_ independently of the actual UI technology.
        var buttonString = await dialogService.ShowMessageBoxAsync(
            title: "New Document",
            heading: "Do you want to create a new Document?",
            message: "You are about to create a new Document. The old document will be erased, changes you made will be lost.",
            buttons: new[] { YesButtonText, NoButtonText });

        if (buttonString == YesButtonText)
        {
            TextDocument = string.Empty;
        }
    }
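
The IDialogService abstraction used here might look roughly like the following. This is a sketch inferred from the calls in the sample, and the actual interface may declare additional members:

    // Sketch of the dialog service abstraction the ViewModel depends on,
    // inferred from its usage; the sample's actual interface may differ.
    public interface IDialogService
    {
        // Shows a message box (a WinForms TaskDialog, a MAUI alert, or a
        // test fake) and returns the text of the button the user chose.
        Task<string> ShowMessageBoxAsync(
            string title, string heading, string message, params string[] buttons);
    }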

Of course, the ViewModel must not call the WinForms TaskDialog directly. After all, the ViewModel is supposed to be reusable with other UI stacks and cannot have any dependencies on assemblies of a particular UI stack. So, when our ViewModel is used in the context of a WinForms app, it is supposed to show a WinForms TaskDialog; if, however, it runs in the context of a .NET MAUI app, it is supposed to display an alert. Depending on the context in which the ViewModel is executed, it effectively needs to do different things. To achieve this, we inject the ViewModel on instantiation with a service provider, which offers an abstract way to interact with the respective UI elements. In WinForms, we pass in the service for a WinForms dialog service; in .NET MAUI, the service for a .NET MAUI dialog service:

    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();

        // We pass the dialog service to the service provider, so we can use it in the view models.
        builder.Services.AddSingleton(typeof(IDialogService), new MauiDialogService());

        builder
            .UseMauiApp<App>()
            .ConfigureFonts(fonts =>
            {
                fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
                fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold");
            });

#if DEBUG
        builder.Logging.AddDebug();
#endif

        return builder.Build();
    }
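
For comparison, here is a minimal sketch of what the WinForms-side registration could look like, assuming a hypothetical WinFormsDialogService implementation and Microsoft.Extensions.DependencyInjection (the sample’s SimpleServiceProvider may wire this up differently):

    // Minimal sketch only: WinFormsDialogService is a hypothetical name,
    // and the sample app's SimpleServiceProvider may differ in detail.
    var services = new ServiceCollection();
    services.AddSingleton<IDialogService, WinFormsDialogService>();
    IServiceProvider serviceProvider = services.BuildServiceProvider();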

With this, the ViewModel doesn’t need to know what exactly happens when, for example, dialogService.ShowMessageBoxAsync is called; the dialog service does different things depending on the UI context. This is why the approach is called dependency injection. The easiest way to get a feeling for what exactly happens in both cases is to single-step through the code paths in both apps and observe which parts are used in the respective UI stacks. Please also note that this sample uses the dialog service in a very simplistic way; you can find more powerful libraries on the web dedicated to that purpose.

Unit testing ViewModels with dependency injection

The Editor sample solution uses the same approach to provide a unit test (inside the solution’s unit test project) for the ViewModel. In this case, the ViewModel also gets injected with the service provider to retrieve a dialog service at runtime. This time, however, the dialog service’s purpose is not to show something visually or interact with the user, but rather to emulate the user’s input:

    public Task<string> ShowMessageBoxAsync(
        string title, string heading, string message, params string[] buttons)
    {
        ShowMessageBoxResultEventArgs eArgs = new();
        ShowMessageBoxRequested?.Invoke(this, eArgs);

        if (eArgs.ResultButtonText is null)
        {
            throw new NullReferenceException("MessageBox test result can't be null.");
        }

        return Task.FromResult(eArgs.ResultButtonText);
    }
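
The event plumbing this relies on is small: the test dialog service exposes an event such as ShowMessageBoxRequested, and the event args carry the emulated answer. A sketch of the event args, inferred from the snippet above (details may differ from the sample):

    // Sketch of the event args the test dialog service uses; the name
    // matches the snippet above, details may differ from the sample.
    public class ShowMessageBoxResultEventArgs : EventArgs
    {
        // The unit test sets this to the text of the button it wants to "click".
        public string? ResultButtonText { get; set; }
    }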

When the ViewModel is supposed to show the MessageBox, an event is fired rather than an actual UI component being shown. This event can then be handled by the unit test method, and the emulated user response is simply passed back as the event result:

        .
        .
        .
        // We simulate the user requesting to 'New' the document,
        // but answering "No" in the message box, so the text is not cleared.
        dialogService!.ShowMessageBoxRequested += DialogService_ShowMessageBoxRequested;

        // We test the first time; our state machine returns "No" the first time.
        await mainFormController.NewDocumentCommand.ExecuteAsync(null);
        Assert.Equal(MainFormController.GetTestText(), mainFormController.TextDocument);

        // We test the second time; our state machine returns "Yes" this time.
        await mainFormController.NewDocumentCommand.ExecuteAsync(null);
        Assert.Equal(String.Empty, mainFormController.TextDocument);

        void DialogService_ShowMessageBoxRequested(object? sender, ShowMessageBoxResultEventArgs e)
            => e.ResultButtonText = dialogState++ switch {
                0 => MainFormController.NoButtonText,
                1 => MainFormController.YesButtonText,
                _ => string.Empty
            };
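
For completeness, the elided setup above might look roughly like this; the names are inferred from the snippet, TestDialogService is a hypothetical fake implementing IDialogService, and the sample’s actual test fixture may differ:

    // Hypothetical setup for the elided part of the test; the sample's
    // actual fixture may differ.
    var dialogService = new TestDialogService();
    var services = new ServiceCollection();
    services.AddSingleton<IDialogService>(dialogService);
    var mainFormController = new MainFormController(services.BuildServiceProvider());
    var dialogState = 0;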

Summary

Command Binding in WinForms makes it feasible to modernize WinForms applications step by step. Separating the UI from the business logic by introducing UI Controllers can be done gradually over time. Introducing unit tests will make your WinForms app more robust. Using ViewModels in additional UI stacks like .NET MAUI allows you to take parts of a LOB app cross-platform and build mobile apps for the areas where that makes sense. Additionally, adopting Azure services becomes much easier with a sound architecture, and essential code can also be shared easily with the mobile app spin-off.

Feedback about the subject matter is really important to us, so please let us know your thoughts and additional ideas! We’re also interested in what additional topics in this area you would like to hear about from us. Please also note that the WinForms .NET runtime is open source, and you can contribute! If you have general feature ideas, have encountered bugs, or even want to take on existing issues around the WinForms runtime and submit PRs, have a look at the WinForms GitHub repo. If you have suggestions around the WinForms Designer, feel free to file new issues there as well.

Happy coding!

The post Using Command Binding in Windows Forms apps to go Cross-Platform appeared first on .NET Blog.


What’s new: Monitor the health and audit the integrity of your analytics rules.

Special thanks to @romarsia for the collaboration and ideas.

Analytics rules in Microsoft Sentinel play a crucial role in helping SOC teams protect the organization against cyberattacks by identifying and detecting potential threats, so that they can analyze and respond quickly to security incidents. It is therefore important for SOC engineers to ensure their detection rules are functioning correctly and producing relevant, actionable information. Beyond that, SOC engineers need to be aware of any planned or unplanned changes made to the rules, to ensure the compliance and integrity of an effective defence.

Having the ability to monitor the health of analytics rules and changes made to them helps to improve the accuracy and efficiency of security operations.

We are pleased to announce the new health and auditing monitoring capabilities for Analytics Rules.

With Analytics Health Monitoring, organizations can get insights into rule health and rule running statuses. Besides that, SOC teams can use analytics health monitoring in the detection rule creation process, in both production and pre-production environments. For instance, during development, the health information can be useful for testing and validation before deployment to production.

In addition, Sentinel's audit monitoring feature provides organizations with a comprehensive view of what changes were made to an analytics rule (by whom, from where, and when). This helps organizations detect any unauthorized changes that may compromise security.
Overview

Before we get started, let’s have a quick overview of what the new health and auditing monitoring capabilities for Analytics Rules offer.

  • Microsoft Sentinel analytics rule health logs:
    • This log captures events that record the runs of analytics rules and their end result: whether they succeeded or failed, and if they failed, why.
    • The log also records how many events were captured by the query, and whether or not that number passed the threshold and caused an alert to be fired.
    • These logs are collected in the SentinelHealth table in Log Analytics.

  • Microsoft Sentinel analytics rule audit logs:
    • This log captures events that record changes made to any analytics rule, including which rule was changed, what the change was, the state of the rule settings before and after the change, the user or identity that made the change, the source IP and date/time of the change, and more.
    • These logs are collected in the SentinelAudit table in Log Analytics.

How to enable health and auditing monitoring

To get health and auditing data from the tables described above, you must first turn on the Microsoft Sentinel health feature for your workspace. For more information, see Turn on auditing and health monitoring for Microsoft Sentinel.

Understanding SentinelHealth and SentinelAudit table events

The following types of analytics rule health events are logged in the SentinelHealth table:

  • Scheduled analytics rule run.
  • NRT analytics rule run.

For more information, see SentinelHealth table columns schema.

The following types of analytics rule audit events are logged in the SentinelAudit table:

  • Create or update analytics rule.
  • Analytics rule deleted.

For more information, see SentinelAudit table columns schema.

Visit Monitor the health and audit the integrity of your Microsoft Sentinel analytics rules to get a list of statuses and suggested steps for errors.

Using health and auditing data

You can use the pre-built functions _SentinelHealth() and _SentinelAudit() on these tables instead of querying the tables directly. These functions keep your queries backward compatible in the event of changes being made to the schema of the tables themselves. To view data related to analytics rules, filter the records by SentinelResourceType or SentinelResourceKind.

Below are some sample queries for your reference:

Rules without ‘Success’ running status:

_SentinelHealth()
| where SentinelResourceType == "Analytics Rule"
| where Status != "Success"

Rules that have been “Auto-disabled”:

_SentinelHealth()
| where SentinelResourceType == "Analytics Rule"
| where Reason == "The analytics rule is disabled and was not executed."

Rule running status by reason:

_SentinelHealth()
| where SentinelResourceType == "Analytics Rule"
| summarize Occurrence = count(), Unique_Rule = dcount(SentinelResourceId) by Status, Reason

Rule deletion activity:

_SentinelAudit()
| where SentinelResourceType == "Analytic Rule"
| where Description == "Analytics rule deleted"

Rule activity by rule name and activity name:

_SentinelAudit()
| where SentinelResourceType == "Analytic Rule"
| summarize Count = count() by RuleName = SentinelResourceName, Activity = Description

Rule activity by caller name:

_SentinelAudit()
| where SentinelResourceType == "Analytic Rule"
| extend Caller = tostring(ExtendedProperties.CallerName)
| summarize Count = count() by Caller, Activity = Description

Besides that, we have provided an Analytics Health & Audit workbook to help you turn your health and audit data into insights quickly.

Sample use case

Let’s walk through a sample use case using analytics health and audit data.

In my environment, I discovered a rule that failed to run with the reason “The analytics rule execution encountered an issue and could not be completed.”

To analyse the running history of this rule, I filtered _SentinelHealth() to entries whose SentinelResourceName equals the impacted rule’s name, over a longer time range. The results show that the rule was running fine up until recently.

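A minimal sketch of such a query, with a placeholder for the rule name (adjust the time range as needed):

_SentinelHealth()
| where SentinelResourceType == "Analytics Rule"
| where SentinelResourceName == "<impacted rule name>"
| where TimeGenerated > ago(30d)
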
Next, I checked whether any changes had been made to this rule by querying _SentinelAudit(). The output shows that some changes were made to the impacted rule.

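Again as a sketch, filtering the audit log to the same placeholder rule name:

_SentinelAudit()
| where SentinelResourceType == "Analytic Rule"
| where SentinelResourceName == "<impacted rule name>"
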
In the records, I can drill down into the ExtendedProperties column, where I find the original values of all the rule’s properties as well as the updated values, under the columns named OriginalResourceState and UpdatedResourceState. Besides that, I have the context of who performed the change, when, and from which IP address, which is useful for the investigation process.

By comparing the OriginalResourceState and UpdatedResourceState values, I was able to identify what had changed in the rule. In this use case, the query was updated with a cross-workspace reference.

Upon investigation, the root cause was that the workspace referenced in the rule’s cross-workspace query no longer existed.

I hope this sample use case was helpful in understanding the process of analyzing analytics health and audit data. Knowing how to utilize health and audit data can be crucial in identifying and resolving analytics issues as they arise.

Learn more

More information can be found in the following documentation:

Concept: Auditing and health monitoring in Microsoft Sentinel | Microsoft Learn

How to enable: Turn on auditing and health monitoring in Microsoft Sentinel | Microsoft Learn

How to use: Monitor the health and audit the integrity of your Microsoft Sentinel analytics rules | Microsoft Learn

SentinelHealth table schema: Microsoft Sentinel health tables reference | Microsoft Learn

SentinelAudit table schema: Microsoft Sentinel audit tables reference | Microsoft Learn

Transport Tycoon fan remake OpenTTD gets largest update in years

OpenTTD 13.0 has been released, which is "one of the largest releases we've done in several years" according to the developers. If you don't know OpenTTD, it's a free, open-source fan remake of Transport Tycoon which greatly expands, polishes and modernises the beloved business sim. This latest update improves the interface further, tweaks the world generation, and more.


Barilaro’s office denied bushfire funds in Labor seats

Former NSW deputy premier John Barilaro’s office effectively excluded Labor electorates from urgent bushfire recovery funding in a Black Summer grants program that “lacked integrity”.

A report by the NSW Auditor-General revealed although there was no designated role for the then National Party leader in the grants program, his office implemented a $1 million minimum for bushfire recovery projects.

The Department of Regional NSW gave the then deputy premier’s office a list of 35 projects to be funded in a fast-tracked first round in 2020, listing their electorates, but the threshold ruled out projects in areas held by the Labor Party, the audit found.

The report said it was unclear why the department listed the electorates as they did not form part of the selection process and that Mr Barilaro’s office’s role “deviated from the guidelines”.

The $541.8 million Bushfire Local Economic Recovery program was jointly funded by the state and federal governments and administered by NSW to pay for projects in bushfire-ravaged communities to help create jobs and protect against future disasters.

The report, tabled in NSW parliament on Thursday, showed 21 projects worth more than $95 million were funded in coalition seats and one worth $12.5 million in an independent electorate.

The Auditor-General said the fast-tracked first round of the program lacked integrity and transparency.

The report acknowledged most of the worst-affected regions were held by coalition MPs, but badly ravaged areas including the Blue Mountains and Tenterfield were among those excluded.

Blue Mountains mayor Mark Greenhill, who accused the government of rorting at the time, said he felt vindicated by the audit.

“This is a disgrace of the highest order. People ripped off were people terrorised and impacted by bushfires,” he told AAP on Thursday.

“To play politics with the victims of bushfires is unacceptable, unconscionable and un-Australian.”

The department told the auditor-general’s office projects in some of those rejected regions, including the Blue Mountains, were funded in later rounds of the program or by the Commonwealth.

Mr Barilaro denied pork barrelling over the grants at a 2021 parliamentary inquiry, saying the first round focused on destroyed buildings, 90 per cent of which were in coalition seats.

AAP has contacted Mr Barilaro’s lawyer for comment.

Premier Dominic Perrottet said he would consider possible improvements in light of the report but denied the funding was pork barrelling.

“I know from my time as premier dealing with the flood response, we’ve ensured that every community across NSW got back on their feet as quickly as possible,” he told reporters.

“That’s been my focus and previously during the bushfires – was to allocate as much funds as possible to provide assistance.”

The report recommended the department establish stronger guidelines, including clear assessment criteria.

Treasurer Matt Kean said he had introduced a strong governance model.

“The public can have confidence that taxpayers’ money is going to its intended purpose,” he said.

The NSW opposition said the audit showed the government does not represent everyone.

“It shouldn’t matter who you voted for,” Labor leader Chris Minns said.

“If you need help from your own government the funds should be supplied, not someone checking who your MP is.”

The audit found other rounds of bushfire grants largely aligned with guidelines and were supported by documentation but could have been strengthened by more detail on projects’ eligibility.

– AAP

The post Barilaro’s office denied bushfire funds in Labor seats appeared first on The New Daily.
