Half an Hour

A place to write, half an hour, every day, just for me.



Canadian Freeze-Dried Food Companies 1 Apr 6:48 AM

(This post is really short, but it was banned by the moderators on Reddit, so it's here)


I've been researching freeze-dried food companies for my upcoming bike-packing trip. Because of recent events, I want to be sure to select only brands that are Canadian. Here are some top freeze-dried food companies for Canadians.

Happy Yak
Based in Quebec, Happy Yak offers a diverse range of freeze-dried meals, including vegetarian and lactose-free options, emphasizing real ingredients and wholesome nutrition. https://happyyak.ca/

Pure Choice Foods
Located in Kanata, Ontario, Pure Choice Foods provides high-quality freeze-dried fruits, vegetables, meats, and treats, focusing on additive-free and gluten-free products. purechoicefoods.ca

Flat Out Feasts
This Canadian company specializes in keto-friendly freeze-dried meals, eliminating unnecessary carbohydrates to provide nutrient-dense options for outdoor enthusiasts. https://flatoutfeasts.ca/

Briden Solutions
Based in Calgary, Alberta, Briden Solutions offers a wide selection of freeze-dried and dehydrated foods suitable for camping and emergency preparedness. bridensolutions.ca




Context Awareness and Context Recognition in Modern Decision-Making - 2 12 Mar 12:27 PM

 


Section Two: Context

In this section we look at the concept of context as it applies to both human and computer cognition. After laying out some perspectives of context and its role in decision-making, we identify and describe three major interpretations of context: as a schema, as a frame, and as a model.

Perspectives on Context

Context is “a complex description of shared knowledge about physical, social, historical, or other circumstances within which an action or an event occurs… (that) does not intervene explicitly in a problem solving but constrains it” (Brézillon, 2004).

Dey and Abowd (1999) define context as “any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves” (see also Abowd, Dey, et al., 2001; see Zainol and Nakata (2010, pp. 126-127) for additional definitions along the same lines).

Winograd (2001, p. 5) argues that context is defined by use rather than by features: “Context is an operational term: something is context because of the way it is used in interpretation, not due to its inherent properties.” He offers a communication and application programming architecture using a ‘blackboard’ metaphor that supports context-aware computing.

Sato (2003, p. 1324) argues that we should represent context through “a pattern of behavior or relations among variables that are outside of the subjects of design manipulation and potentially affect user behavior and system performance.” He describes a three-part strategy for context-sensitivity: sensing contextual changes, re-configurable architecture, and creating and managing contexts (p. 1327).

Dourish (2004) describes an incompatibility between two views of context.

Guarino & Guizzardi (2015, 2016) offer an account of context as a ‘scene’ such that “events emerge from scenes as a result of a cognitive process that focuses on relationships: relationships are therefore the focus of events” (2016, p.2) and where ‘scenes’ are whatever occurs in a certain region of spacetime.

Types of Context

Some types of context, as described in the literature:

Context-Aware Decision Support

Computational context-aware decision support (CaDS) systems are evolving from ontology-based expert systems to attribute-based neural network systems. A CaDS system “consists of a situation model for shared situation awareness” (Feng et al., 2009, p. 455).

Such systems are intended to address information overload, for example, in a Tactical Information Prioritization System (TIPS) (Marmelstein et al., 2008, p. 259). The aim of such research “is to enhance the decision-maker's perception, comprehension, and projection of the underlying knowledge space” (Hanratty et al., 2009, p. 1).

Dourish and Bellotti (1992, p. 107) state that awareness is an understanding of the activities of others, which provides a context for your own activity. “Awareness supposes that one is able to transform pieces of contextual knowledge into a proceduralized context at the current focus of attention” (Mäkelä et al., 2018, p. 7253).

To date, context-aware decision support systems have been designed along the lines of expert systems, employing "ontology-based decision support” and consisting of “sensor agents to detect raw-level data, a context management agent for handling context data, an information service agent, an operational decision support agent, and user agents for maintaining user information” (Song et al., 2010, p.1).

Contemporary approaches are studying the use of deep learning. Early work found “the intuition of equating the template attribute weights to neural network weights resulted in a good method to learn the weights directly from observation of prior agent behaviour” (Gonzalez, 2004, p. 169) supporting Context-based Reasoning (CxBR) as “a human behaviour modelling technique that uses this approach to model human behavior.”

Context Awareness and Recognition

‘Context awareness’ denotes the capability or fact of being aware of context; by contrast, ‘context recognition’ describes the process or method of achieving context awareness.

Bricon-Souf and Newman (2007) describe context awareness as including "the ability... to detect, sense, interpret, act and respond to aspects of the environment, such as location, time, temperature or user identity."

We could say it is the ability to examine the environment and react to dynamic changes, such as the location of the user or the collection of nearby people, hosts, and accessible devices, adapting behavior to the context of the application and the environment.

We see similar definitions of 'context awareness' applied to both human and computer applications. Dey (1999) for example writes “A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task.”

A variety of context recognition mechanisms may be employed. For example, in a survey of research on context recognition in surgery, Pernek and Ferscha (2017) identify the following:

In computer applications, the predominant mechanisms employed were machine learning or neural network-based pattern recognition algorithms. For example, Radu et al. (2018) study “the benefits of adopting deep learning algorithms for interpreting user activity and context as captured by multi-sensor systems” (p. 157.2). Similarly, Billones et al. (2018) discuss the use of deep learning for vehicular context recognition. Alajaji et al. (2020) “propose DeepContext, a deep learning based network architecture for recognizing a smartphone user's current context.”
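The details of the cited systems are not given here, but the underlying pattern-recognition idea can be illustrated with a deliberately minimal sketch (all names and data below are invented for illustration; real systems such as DeepContext use deep networks over far richer sensor streams): classify a multi-sensor feature vector by its distance to per-context centroids learned from labelled examples.

```python
from math import dist  # Euclidean distance, Python 3.8+

def train_centroids(examples):
    """examples: list of (feature_vector, context_label).
    Returns one mean feature vector (centroid) per context label."""
    groups = {}
    for features, label in examples:
        groups.setdefault(label, []).append(features)
    return {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in groups.items()
    }

def recognize_context(centroids, features):
    """Classify a sensor reading as the context with the nearest centroid."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Invented toy data: (accelerometer magnitude, ambient noise level)
training = [
    ([0.1, 0.2], "at_desk"), ([0.2, 0.3], "at_desk"),
    ([2.0, 1.5], "walking"), ([2.4, 1.7], "walking"),
]
centroids = train_centroids(training)
print(recognize_context(centroids, [0.15, 0.25]))  # -> at_desk
```

A deep-learning recognizer replaces the centroid step with a learned non-linear mapping, but the contract is the same: raw sensor features in, a discrete context label out.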

Context as schema

The general sense of a schema is a semantic representation: a generalised form combined with elements or blank spaces that are filled by concrete particulars to constitute an instance of the schema (Bartlett, 1932; Corcoran and Hamid, 2016).

Context may be thought of as “a mental codification of experience that includes a particular organised way of perceiving cognitively and responding to a complex situation or set of stimuli” (Merriam-Webster, 2024). There are senses of ‘schema’ in logic, psychology, and computer science. Schemas may be thought of variously as:

Schemas are recognized to be constantly changing. Bartlett’s “concept of schema emphasises the dynamic and evolving nature of these cognitive constructs, which continuously adapt as we encounter new information” (Main, 2023).

Schema Development

Depending on the discipline or perspective, schema development may be described as ‘orientation’, ‘view’, or ‘case-based’.

‘Orientation’ is one of the steps in the OODA loop, discussed above. Orientation is depicted as “a schema to elucidate the role of human cognition (perception, emotion, and heuristics) in defense planning in a non-linear world characterized by complexity, novelty, and uncertainty” (Johnson, 2023, p. 43).

In IT and database development for systems such as JBI-IM (discussed above), context is represented as the development of various ‘views’ representing various ways to display underlying data schemas.

Schema Activation

Schema ‘activation’ is the deployment or retrieval of a schema to be applied or descriptive of a particular situation, and is often depicted as a cognitive process. For example: “Activating schemata and training students to use reading strategies are both generally effective in reading comprehension skills” (Cho and Hyun, 2020, p. 49). “Through schema activation, judgments are formed based on internal assumptions (bias) in addition to information actually available in the environment” (Worthy et al., 2024).

Schema Change

Schemas may change either through accommodation or assimilation of new data through either a top-down or bottom-up process.

For example, in the Composition Modeling Framework (CMF) (Staskevich et al., 2007), "when existing schemas change on the basis of new information, we call the process accommodation. In other cases, however, we engage in assimilation, a process in which our existing knowledge influences new conflicting information to better fit with our existing knowledge, thus reduc(ing) the likelihood of schema change” (Worthy et al., 2024).
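The assimilation/accommodation distinction can be sketched in code. Everything below is an invented toy model, not the CMF implementation: a schema holds expected slot values with a tolerance; small discrepancies are absorbed without changing the schema (assimilation), while large conflicts force the schema itself to be revised (accommodation).

```python
class Schema:
    """Toy schema: a dict of slot -> expected value, plus a tolerance."""
    def __init__(self, expectations, tolerance=1):
        self.expectations = dict(expectations)
        self.tolerance = tolerance

    def update(self, slot, observed):
        """Assimilate small discrepancies; accommodate large ones."""
        expected = self.expectations.get(slot, observed)
        if abs(observed - expected) <= self.tolerance:
            # Assimilation: existing knowledge absorbs the observation;
            # the schema itself is left unchanged.
            return "assimilated"
        # Accommodation: the observation forces the schema to change.
        self.expectations[slot] = observed
        return "accommodated"

s = Schema({"commute_minutes": 30}, tolerance=5)
print(s.update("commute_minutes", 33))  # small mismatch -> assimilated
print(s.update("commute_minutes", 55))  # large mismatch -> accommodated
print(s.expectations["commute_minutes"])  # schema now expects 55
```

The tolerance parameter stands in for the (much richer) cognitive question of how much conflicting information a schema can absorb before it must change.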

Context as frame

A ‘frame’ is most generally thought of as an organisation of experience (Goffman, 1974) and in this sense is more a cognitive or psychological construct than a semantic one. It is an interpretation of reality “that puts the facts or events referred to in a certain perspective” (Morasso, 2012, p. 5).

From a more computational perspective, Minsky’s (1974) account is an elaboration of the schema. “Here is the essence of the theory,” writes Minsky. “When one encounters a new situation (or makes a substantial change in one's view of the present problem) one selects from memory a structure called a Frame. This is a remembered framework to be adapted to fit reality by changing details as necessary.”

Similarly, in their consideration of choice theory under uncertain conditions, Tversky and Kahneman argue that “the normative and the descriptive analyses of choice should be viewed as separate enterprises” (1986, p. s275), with framing belonging to the latter (for example, whether someone is risk-tolerant or risk-averse).

Lakoff (2010, p. 71) describes frames as “structures (that) are physically realized in neural circuits in the brain. All of our knowledge makes use of frames, and every word is defined through the frames it neurally activates.”

Examples

The concept of a frame is at once less formal and more detailed than the schema, and consists not only of a generalised description of a situation or collection of data, but also objectives, expectations or values. These are illustrated with the following examples:

Frame vs Framework

A frame should be distinguished from the related but distinct concept of the ‘framework’. The latter is not a cognitive or psychological construct, but rather a method or process designed to explain, guide or improve decision-making (for example, Elgoff and Smeets, 2023, p. 502). In this context, a framework is best viewed as a decision-making or design tool (see ‘Decision-Making’, above).

Context in metaphor

Metaphor is a powerful instrument for creating and representing frames in cases where literal representation is insufficient.

“The concepts that govern our thought are not just matters of intellect,” write Lakoff and Johnson (1980, p. 3). The metaphor ‘argument is war,’ for example, “is one that we live by in this culture; it structures the actions we perform in arguing.” Similarly, Taylor (2008, p. iii) writes, “The conception of literal meaning adopted by both semantic and pragmatic metaphor theorists, which roughly indicates an adherence to a lexical authority and conventionally accepted grammar, is far too limited in scope to account for what is generally taken to include literal meaning in the use of language.”

Metaphor may be thought of “as an eminently cultural linguistic phenomenon”, however, “There are several different ways of thinking about the nature of context in metaphor production that is not necessarily cultural” (Kövecses, 2017, p. 307).

Metaphors both define and are defined by context. “The purpose of metaphorical framing is to convey an abstract or complex idea in easier-to-comprehend terms by mapping characteristics of an abstract or complex source onto characteristics of a simpler or concrete target” (Wikipedia, 2024). It “tends to illuminate certain aspects while obscuring others” (Norscini and Daniela, 2024, p. 14). Thus a complex phenomenon is rendered more concrete.

Context as model

Context as a model is predominantly found in the form of a ‘context model’. “Context models are used to illustrate the operational context of a system - they show what lies outside the system boundaries” (Kurkovsky, 2024; Sommerville, 2015, Chapter 5).

In an ontology, a context model helps define a subject using a semantic analysis of information related to the subject. Wang et al. (2004, pp. 18-19) describe several informal context modelling approaches and present a formal context ontology. A software system context model “explicitly depicts the boundary between the software system and its external environment” (Johnston, 2021). A physical system context model may define an environment for a software simulation, for example, a digital twin (Sahlab et al., 2022, p. 463).

Large language models (LLMs) also have mechanisms to define context. For example, a ‘context window’ defines how much input an LLM can attend to in a single request. A recently released version of Google Gemini offers a 1-million-token context window that allows it “to understand up to one hour of video, 11 hours of audio, over 700,000 words (so it could read, digest and answer questions about Tolstoy's War & Peace) or over 30,000 lines of code” (Pichai, 2024).
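The mechanics of a context window can be illustrated with a toy helper (the numbers are invented and real tokenizers are subword-based and model-specific): when the accumulated input exceeds the window, the oldest tokens are dropped so that the most recent input still fits.

```python
def fit_to_window(tokens, window_size):
    """Keep only the most recent tokens that fit in the context window."""
    if len(tokens) <= window_size:
        return tokens
    return tokens[-window_size:]  # drop the oldest overflow

# Toy history of ten "tokens" squeezed into a four-token window
history = ["t%d" % i for i in range(10)]
print(fit_to_window(history, 4))  # only the newest four survive
```

Production systems use more sophisticated strategies (summarising or retrieving older material rather than discarding it), but the constraint they all work around is the same fixed-size window.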

Today, the Model Context Protocol (MCP) is used by generative AI systems such as Claude as a mechanism connecting them to underlying systems and information, such as graphs and databases on local filesystems or accessible in the cloud (Anthropic, 2024).

Types of Model

It is beyond the scope of this review to identify and define the full scope of models and model technology; the typologies offered below provide a sense of this scope with respect to context.

Process Models:

Business Models:

Computational Models:

Validation

Models are intended to serve as representations of processes, data or physical environments. As such, unlike schemas or frames, models have a unique requirement of validation. The following terminology is employed:

In a wider context, other criteria and terminology may be used to evaluate models, for example, model fit and measurement invariance (Goldammer et al., 2024). Similarly, an ‘inference to the best explanation’ model minimally consists of the following:

Additionally, theory evaluation may consider ‘epistemic virtues’ such as simplicity, parsimony, or commensurability.

Summary

In this section we considered the nature and attributes of context as it informs human and computational cognition, and in particular, expanded upon three major interpretations of context: as a schema, as a frame, and as a model.

It is not clear that any individual interpretation of context offers a comprehensive understanding of decision-making as referenced in section 1. The three interpretations of context are themselves contextual in nature, offering a mixture of mechanism and metaphor in an effort to convey an intuitive understanding of the subject.

In the next section, we will examine the role of data in the decision-making process generally and offer a broader decision-making model that explicitly incorporates contextual factors.

--

Image source: Figma https://www.figma.com/resource-library/context-diagram/

This article is based on work completed for Defence Research and Development Canada, Contract Report DRDC-RDDC-2025-C035

References will be listed after the series is complete.




Context Awareness and Context Recognition in Modern Decision-Making - 1 12 Mar 12:25 PM

 


Section One: Modern Decision-Making

Introduction

Today's decision-making takes place in an increasingly dynamic and complex environment. Its domains include not only traditional physical environments but also digital-age domains such as cyber and information. The speed at which the environment changes has increased, and decision-makers require new ways to adapt to the changing contexts in which they create change and fulfill intents.

This report presents the results of a scoping study in which the changing dynamics of context recognition and awareness are mapped and a decision-making model based on this mapping is outlined. It is divided into four main sections.

Literature Review

This review identifies relevant materials in the federal science database, focusing on key terms within the field, in addition to related work found through expanded-search techniques on the World Wide Web. The review was conducted within the context of the pre-defined concepts of ‘command and control’, but may be applied to wider contexts.

The review considered definitions of ‘context’ and subthemes such as ‘context awareness’ and ‘context recognition’, which depict ‘context’ as a type of generalisation that can lead to an appropriate response. The review identified three major interpretations: context as schema, context as frame, and context as model.

Defining Context

Context may be defined as “any information that can be used to characterise the situation of entities”. The term ‘context’ is frequently used in conjunction with terms related to capability or capacity, such as ‘context recognition’ or ‘context awareness’.

Context Awareness is the “current characterization (as described by pattern, scenario, type or template) of the situation of entities”, i.e., “the ability to detect, sense, interpret, act and respond to relevant aspects of the environment, such as location, time, temperature or user identity,” where relevance is defined by the current task or set of objectives and could enable a prediction of actor intentions or future events.

Context Recognition is “recognition of a previously characterised context (as described by pattern, scenario, type or template)”, including possible interpretations, actions and responses, supporting a prediction of actor intentions or future events.

To contrast context and situation, we say that a ‘situation’ is the state of affairs in the environment relevant to a decision or an action, while a ‘context’ is a type of generalisation that can be inferred from the situation and that is in turn associated with specific decisions or actions. For example, the situation may be that orange and black stripes are present in a canopy of green, while the context is that you are near a tiger.

In this discussion, context is thought of as a range of possible situation classifications such that we are able to classify the situation as, e.g., class 1 or class 2. There may be different context sets addressing various perspectives of context.
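The tiger example can be restated as a toy classifier (the rule set and feature names are invented for illustration): low-level situation features are mapped to one of a fixed set of context classes, and the context class, not the raw situation, indexes the response.

```python
def classify_context(situation):
    """Map a set of observed situation features to a context class."""
    if {"orange_stripes", "black_stripes", "green_canopy"} <= situation:
        return "predator_nearby"
    return "no_known_threat"

# The context class, once inferred, is what selects the action.
ACTIONS = {"predator_nearby": "retreat quietly", "no_known_threat": "continue"}

situation = {"orange_stripes", "black_stripes", "green_canopy", "birdsong"}
context = classify_context(situation)
print(context, "->", ACTIONS[context])  # predator_nearby -> retreat quietly
```

The point of the sketch is the two-step structure: the situation (raw features) is generalised into a context class, and decisions attach to the class rather than to the raw observations.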

The Decision-Making Environment

Decision-making itself is a structured process, following a logical progression through:

“Decision making is anticipated to occur in an extremely complex and interactive future environment (such that) future operating environments will require a real time, fully networked... capability” to support “integration and synchronisation of actions” (Tucholski, 2021, p. 5; JP 3-0, 2022). The eventual outcome will be a cloud-based decision-making environment that incorporates a large number of “relevant data feeds as well as artificial intelligence and machine learning to enable decision-makers to maintain detailed situational awareness of the environment” (Gordon, 2023; Cliche, 2024).

Modern Decision Science

The focus of modern decision science “is on building a framework capable to offer an effective tool for decisions in the field of force planning and operations planning” (Yuan and Singer, 2021), which requires a capacity to respond to dynamic and changing environments.

Classic decision-making science approaches environments as systems to which qualitative methods, such as scenario spinning, operational gaming, or Delphi techniques, may be applied (Davis et al., 2005, p. 33). However, in the face of increasingly complex environments, “Instead of seeking to ‘predict’ effects on a system of various alternatives and then ‘optimizing’ choice, it may be far better to recognize that meaningful prediction is often just not in the cards and that we should instead be seeking strategies that are flexible, adaptive, and robust” (Ibid., p. 46).

Some modern decision-making approaches found in the literature follow. Each of them highlights the role of context in decision making in a modern environment.

OODA Loop: John Boyd's observation-orientation-decision-action metaphorical decision-making cycle (or “OODA loop”) is used, for example, to make fast and accurate decisions (Maccuish, 2012, p. 67). “Because they’re developed and tested in the relentless laboratory of conflict, military mental models have practical applications far beyond their original context.” In the OODA model, context plays a key role in the ‘orientation’ stage.

Orientation “involves assessing the relevance and significance of the data, understanding how it fits into the larger context, and identifying potential opportunities or threats” (Wale, 2024). The OODA loop is recognizable in the Canadian Forces Operational Planning Process (OPP), which recognizes five stages: initiation, orientation, course of action development, plan development, and plan review (CACSC, 2018, pp. 11-16). The OPP is informed by descriptions of other actors, terrain, structures, capabilities, organizations, people, and events (ASCOPE) (CACSC, 2018, p. 18).
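The OODA cycle itself can be sketched as a loop (the stage functions and the thermostat-style example below are invented placeholders, not Boyd's formulation): each pass observes the environment, orients the observation against context, decides on an action, and acts, with the result feeding back into the next observation.

```python
def ooda_step(environment, orient, decide, act):
    """One pass of the observe-orient-decide-act cycle."""
    observation = dict(environment)      # Observe: snapshot the world
    context = orient(observation)        # Orient: interpret against context
    action = decide(context)             # Decide: choose a course of action
    return act(environment, action)      # Act: change the environment

# Toy example with invented rules: keep a room at or above 20 degrees.
orient = lambda obs: "too_cold" if obs["temp"] < 20 else "ok"
decide = lambda ctx: "heat_on" if ctx == "too_cold" else "heat_off"

def act(env, action):
    env = dict(env)
    env["temp"] += 2 if action == "heat_on" else 0
    return env

env = {"temp": 16}
for _ in range(3):                       # three passes through the loop
    env = ooda_step(env, orient, decide, act)
print(env["temp"])  # 16 -> 18 -> 20, then heat stays off
```

Note that context enters only through the `orient` function: the same observation produces a different decision if the orientation stage classifies it differently, which is why the orientation stage is where context does its work.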

Intent Model: David Marquet's intent-based leadership (IBL) model “is not based on the flow of power from one individual to another as in the leader-follower model, but is instead based on a goal, or intent, shared between individuals. By analogy, the leader-follower model is similar to command and control, but the IBL model is similar to mission command” (Fernandez-Salvador, 2017).

While IBL is most often discussed from a leadership perspective, as a training model it develops a learner’s sense of context. “With IBL, learners gain experience in making sense of a problem. As they develop the solution to a problem, the problem begins to make sense, and learners begin to problem solve and adapt” (Duffy and Raymer, 2010, p.v).

Joint Decision-Making: Complex environments often require multiple organizations and branches and hence entail joint decision-making. The joint decision-making model builds on the OODA loop to create constructs like joint operational planning and joint information and intelligence preparation in order to enable a systems understanding of an information environment (Sylvestre, 2022, p. 14).

Robust Decision-Making (RDM): Robust decision making (RDM) “is a quantitative, decision support methodology designed to inform decisions under conditions of deep uncertainty and complexity (to) help defense planners make plans more robust to a wide range of hard-to-predict futures” (Lempert et al., 2016, p. 2).

In contrast to “agree-on-assumptions” (Kalra et al. 2014) or “predict-then-act” approaches to decision-making, RDM takes an “‘agree-on-decisions’ approach, which inverts these steps,” using “models and data to stress test the strategies over a wide range of plausible paths into the future.”

Decision Making under Deep Uncertainty (DMDU): In cases of deep uncertainty there is not agreement on how the system works nor what future outcomes may be. Accordingly, as (Kwakkel and Haasnoot, 2019, p. 357) argue, various representations or models may apply. Scenario thinking, exploratory modelling and adaptive thinking are methods of preparing for alternative situation types. DMDU proposes a taxonomy of such methods (of which RDM, above, is one) that apply in different cases.

Situation Recognition

Modern decision theory requires situation awareness (SA) in order to comprehend which, if any, representation or model may apply. “Determining exactly what constitutes SA is a very difficult task, given the complexity of the construct itself, and the many different processes involved with its acquisition and maintenance” (Banbury and Trembley, 2004, p. xiii). Moreover, "...models of SA refer to cognitive processes in general terms, but do not specify exactly what processes are involved and to what extent" (Ibid.).

An understanding of the mechanisms of arriving at situation awareness, here called ‘situation recognition’, is required. "The test of situation awareness as a construct will be in its ability to be operationalized in terms of objective, clearly specified independent (stimulus manipulation) and dependent (response difference) variables ... Otherwise, SA will be yet another buzzword to cloak scientists' ignorance” (Flach, 1995, p. 155).

In computer vision, “‘situation recognition’ is the task of recognizing the activity happening in an image, the actors and objects involved in this activity, and the roles they play. Semantic roles describe how objects in the image participate in the activity described by the verb” (Pratt et al., 2020, p.2).

This involves (per Wikipedia):

According to Endsley (Ibid.), expert decision-makers act first to classify and understand a situation, then proceed to action selection, for example by matching to prototypical situations in memory: Dreyfus (1981) on experts, deGroot (1965) on chess, Mintzberg (1973) on management, Kuhn (1970) on science (p. 34). This process encompasses three major approaches developed in the years following Endsley’s work.

As seen through the examples below, more recent models are based on graph analysis. “Existing situational awareness systems use prebuilt situational knowledge-based symbolic reasoning, making it very difficult to infer situational knowledge building or unexpected situations in complex, time-space dynamic environments such as battlefields” (Lee et al., 2023, p. 6057).

Examples of Situation Recognition Models

Following are a few examples of situation models drawn from the literature:

Summary

Modern decision-making has evolved from a simple process, described by the OODA loop, to a complex process that involves the development and application of models based on prior knowledge, scenario building, and situation recognition.

This has shifted the emphasis in decision-making from being one in which being informed and aware is sufficient to one in which considerable pre-planning, including especially model-building, is required. The task of developing and applying such models is often a joint one, involving as well the development of collaborative processes and information networks.

--

Image source: Strategy Punk https://www.strategypunk.com/navigating-the-ooda-loop-mastering-adaptive-decision-making-in-complex-situations/

This article is based on work completed for Defence Research and Development Canada, Contract Report DRDC-RDDC-2025-C035

References will be listed after the series is complete.


Grappa-Ling With Mark Carney (2) 28 Feb 7:05 AM

 

Although he tries to add some anecdotes, the first four chapters of Carney's book read like an economic text, covering as they do the histories of value theory and of money. There are no great revelations here, though it feels like the chapters are a subtle push-back against people who still believe in mercantilism or the gold standard.

The chapters also feel like they are addressing the absolutism that characterizes a lot of political thought rooted in (some of the) principles of economics. I want to highlight a few of these that are relevant to me.

First is the distinction between what we might call 'productive' and 'unproductive' economic activity. Carney makes the point, with which I agree, that this can be pretty arbitrary. To begin, we have two sets of factors that do not contribute to the 'value' of an object:

There are also activities that are deemed 'non-productive' because

Value today is determined not by intrinsic worth but rather by subjective assessment (including the famous laws of supply and demand) as mediated by a range of other factors. Carney points to "widespread ignorance of both (this approach's) limitations and its impacts":

The danger of market economics is that it is a self-fulfilling prophecy: if that which is not in the market is not valued, then there is a tendency to bring everything that is valued into the marketplace. But the marketplace has one determinant of value: price. And as Carney says, "effective market functioning requires other sentiments, such as trust, fairness and integrity."

He makes essentially the same point in his discussion of money, reaching much the same conclusion from another direction. Money, like value, has undergone a transition over time from being backed by assets (such as land or gold) to being backed by a set of institutions, and most notably banks. This wasn't because gold is not valuable, but because being locked into a gold standard "would ultimately fail because its values were not consistent with those of society. It prioritised international solidarity over domestic solidarity."

Quite simply (and this is a bit of an overstatement) the ability to govern required the consent of the governed, illustrated by Carney through an extended discussion of the Magna Carta. "In order to function within a democracy, the authority of independent bodies must be constrained, allowing them to do only that which is necessary to pursue specific objectives, and they must be accountable to the people for their performance." (p. 77)

This leads to a discussion of the evolving role of banks, and in particular, of central banks, in preserving the value of money. They do this by fixing interest rates to regulate the cost (and therefore supply) of money, and (more recently) regulating and backing the stability of banks. This authority was historically dedicated to fighting inflation at all costs, but over time, has come to include a mandate to consider the economic health of the country more generally.

There is to my mind far too much talk of 'tough decisions' in chapter four. These decisions aren't 'tough' at all for the people who are making them, unaffected as they are by unemployment, low wages, and increasing poverty. The battle against inflation represents first and foremost the interest of those with money, and protecting the value of that money isn't 'tough' at all for them. The actual 'tough' decisions they face are the decisions that promote the welfare of the people even when it harms the interests of those with money.

In any case, Carney draws some important lessons from all of this:

These lessons in turn lead to his overall conclusion: "The value of money and the legitimacy of the Bank come from people’s trust and their belief in the fairness and integrity of the system... That is a clue to what gives money its value: resilience, solidarity, transparency, accountability and trust." (pp. 86-88)

Honestly, this is a conclusion I really want to believe. It has all the elements of a foundation for liberal democracy. I'm just not sure it can be sustained.

What we really see through these first four chapters is a complex of three major forces:

None of these is going away; the events of the last decade should make that clear. 

Human needs (and human suffering) show us that some things are vital no matter what the price (including especially the basics of human sustenance). Even if labour is worth nothing, people will fight to get what they need to survive. And this is exacerbated by forces beyond our control as individuals, such as resource depletion and climate change.

Wealth and power are also not going away. We still see in some countries a willingness to invade others. And we see the influence of the wealthy over public policy and government. Rent-seeking and profit drain the resources of the poor, pushing them towards subsistence-level existence.

The need for the consent of the governed is probably the least reliable in the current age. Carney points to "resilience, solidarity, transparency, accountability and trust," but he will have to come to grips with the fact that, as Chomsky argues, these can be 'manufactured' through propaganda, deception and misinformation, rather than earned. And even if we are in the business of earning trust, it's not clear that these are the values that inform society.

I always keep in mind the most salient argument in Patrick Watson's The Struggle for Democracy - the willingness to debate and reason in good faith about the governance of a society depends on the existence of a certain level of prosperity in that society. The same is true of values. Some values - most values, even - don't survive poverty. What we really believe is ethical is not always what we describe in our ethical theories.


 



Grappa-Ling With Mark Carney (1) 21 Feb 7:14 AM (last month)

I've been reading Mark Carney's book Value(s). It seems a reasonable read given his new place in Canadian political affairs. It's a serious book by a serious thinker, which I must say is a refreshing change in a landscape dominated by demagogues. Will I support everything he says? No. But the thinking here is well worth engaging.

Here (I say after having read a chapter and a half) is the core argument: value is based on values. 

He sets this up with an argument offered by Pope Francis:

Our meal will be accompanied by wine. Now, wine is many things. It has a bouquet, colour and richness of taste that all complement the food. It has alcohol that can enliven the mind. Wine enriches all our senses. At the end of our feast, we will have grappa. Grappa is one thing: alcohol. Grappa is wine distilled.

Humanity is many things – passionate, curious, rational, altruistic, creative, self-interested. But the market is one thing: self-interested. The market is humanity distilled. And then he challenged us: Your job is to turn the grappa back into wine, to turn the market back into humanity. This isn’t theology. This is reality. This is the truth.

Now I like this sentiment for many reasons. For one thing, it sets his values - and mine, and apparently Pope Francis's - apart from the sort of fundamentalism that reduces everything worthy in humanity to a single calculation based on self-interest. A lot follows from that.

And in particular, this calculation pushes back against a dominant contemporary theory of value: that value is based on price. This is important because it means that price isn't based on value, it is a relative and fluctuating measure based on willingness to pay, as determined by the (value-free) free market. A lot follows from this.

For example, in my own world, we have to choose between which research projects we pursue and which we don't, since we have limited time and resources. Where I work, the dominant metric has sometimes been whether there is a private sector company willing to pay for the work. Without commercial partners, the work will not proceed. This one fact has delayed my own work on personal learning environments by decades, because there's no market demand for a free product that makes access to education free.

But if values determine value, what are those values? Carney takes a stab at it in the first chapter:

The experience of the three crises suggests that the common values and beliefs that underpin a successful economy are:

Now if I were to select a set of basic values that underpin everything I do, these wouldn't be the values I would choose. That doesn't make them bad values, by any means, and a politician could do a lot worse than this (examples aren't hard to find). But I don't really think this set of values suits the purpose to which they're being applied.

To my mind, there are two (not necessarily conflicting) sets of values at work that we need to consider: first, those describing the mechanisms that will underpin a successful society (and not just a successful economy, though Carney sometimes conflates the two); and second, those describing the purpose or reason we want a successful society in the first place.

Carney's list blends the two. We see measures that make the economy (society) work better, such as fairness, responsibility and solidarity; and we see measures that speak to the purpose of an economy (society): creativity, protecting the vulnerable, improving the common good.

Moreover, when looking at those that describe the purpose of an economy (society), Carney is focused almost exclusively on a broader social purpose. Not that there's anything wrong with that, but I don't think we begin with a social purpose; rather, a social purpose (just as social knowledge does) emerges from the very real concrete practical day-to-day purposes of individuals.

I think Carney knows this, if we are to judge by his account of Adam Smith in the next chapter. Here's an excerpt from his summary:

The central concept that links all of Smith’s works is the idea that continuous exchange forms part of all human interactions. This is not just the exchange of goods and services in markets, but also of meanings in language and of regard and esteem in the formation of moral and social norms. Humans are social animals who form themselves in action and interaction with each other across all spheres of their existence. 

Smith’s goal in writing The Theory of Moral Sentiments was to explain the source of humankind’s ability to form moral judgements, given that people begin life with no moral sentiments. He believed that we form our norms (values) as a matter of social psychology by wishing ‘to love and to be lovely’ – that is, to be well thought of or well regarded. 

Smith proposed a theory of ‘mutual sympathy’, in which the act of observing others and seeing the judgements they form makes people see how others perceive their behaviour (and therefore become more aware of themselves). The feedback we receive from perceiving (or imagining) others’ judgements creates an incentive to achieve ‘mutual sympathy of sentiments’ that leads people to develop habits, and then principles, of behaviour which come to constitute their conscience.

This long excerpt is necessary because it runs so contrary to contemporary caricatures of Smith as a market fundamentalist believing only in the 'invisible hand' as a source of value and values. There's far more to it, and as I argue in Ethics, Analytics and the Duty of Care, it is this concrete, practical and day-to-day moral sentiment that defines what is right, good and valuable in our own lives. It is only after we have run any proposal through this filter that we can speak of value as being emergent from the marketplace.

I have a lot more to read from Carney's 508 page volume, but I think we're off to a good start.


Content and Cognition 18 Feb 12:54 PM (last month)

In response to a post from Clark Quinn.

My view is that if you're still channeling Paul Kirschner you are at the very least endorsing an idea of cognition as a representational system, even if not a physical symbol system, in which there is a clear separation between 'content' and other (presumably incidental) cognitive activities. That to my mind would be enough to make you a cognitivist.

This isn't contrary to situationism. It can be true that our actions can inform our cognitive state, and still have a basis in both 'content' and (presumably incidental) non-content. Consider for example Wittgenstein inferring what a person 'knows' or 'believes' about the thickness of the ice as he walks across the frozen lake. And how we act can feed back into what we 'know' through the creation of contentful experiences.

I think that to be a non-cognitivist it is necessary to be a non-representationalist. What that means is a bit difficult to tease out (since, in principle, anything can be a representation of anything, if viewed in the right way). Minimally, though, to be a representationalist is to be able to describe functionally some significant property of a person that can be shared across physical instances without respect to the physical constitution of that property. In other words, for 'content', qua content, to have physical effects (ie., to influence thoughts, experiences and behaviours).

This is where the debate on consciousness comes in. We all (presumably) have consciousness. But what is it? Many (most?) theorists say that consciousness has to be consciousness *of* something (cf Descartes' cogito). So we can draw a separation between the 'content' of consciousness, and the experience (or 'qualia'). "There must be something more than the physical elements." But must there? In my view, consciousness is experience - that is, to be conscious is to have experiences.  There's no distinction to be drawn between the two.

If you go sub-symbolic (which I think you do) then the 'representations' are patterns of neural activation. To be a cognitivist from that perspective becomes rather more difficult, as it requires holding that there are certain patterns of activation that are common across individuals (ie., you could see pattern P in both person A and person B) and where the *pattern* - and not the physical instantiation of the pattern - is causally relevant. I can't imagine such a thing, but I guess there are some constructivists that can.

In general - the cognitivist (ie., the non-reductionist) will always say there's something (usually formal) and non-physical that constitutes (actual) cognition, and that it is this 'content' that is what we are trying to pass from person to person in education. Presented in bits and pieces it can sound convincing, but when we view the mechanism as a whole, it becomes (to my mind) implausible.


AI and Environmental Justice 12 Feb 2:21 PM (last month)

 

Responding to: Harnessing AI for Environmental Justice, by request. This originated as a Mastodon thread; I'll preserve the use of the informal 'you' throughout.

--

So, the document is addressed to activists and takes a "yes and" approach to their objections to AI, including in many cases where 'yes' isn't really the response I'd have, but I recognize the need to write to the audience.

I'm also personally less likely to use 'stories' to describe narrative framing, and more likely to use the language of 'frames' and 'setting context'. Again, though, I put this down to writing to your audience.

But I would save my main comment for the term that appears in the title, 'environmental justice'. The paper doesn't really grapple with the term until page 19, and then only in a well-placed call-out quote, specifically: "how might communities most adversely affected by climate impacts contribute to and shape conversations about the development of AI?"

And this leads to my major thought about the document as a whole: how do we ensure people aren't left out of AI?

I mean, when you get right down to it, AI is mathematics. And, it's more efficient to use a computer to do mathematics than to have a person do it. That's why NASA switched from human 'calculators' to computers.

This is the same for any other framing of AI. If you think of it as 'creativity', or 'content generation', or 'pattern recognition', it's going to be more efficient to use a computer than to employ a human.

That's why AI is a social justice issue.

So what do we mean, in this context, by environmental justice (which I take as closely related to social justice)? The paper posits: curiosity, transparency, accountability, diverse voices, sustainability, community, and intersectionality.

This is very much speaking to your audience, but I'm not seeing any theme or idea here. To put it in your terms: what is the story here? These are words they like to hear, but why are they here?

You could adopt a frame of 'justice as fairness' which asserts a narrative of non-harm and inclusion. But this classic of liberal ideology doesn't play well in this community.

Similarly, utilitarian ethics, which underpins the twin ideals of beneficence and non-maleficence, doesn't play well with this audience.

Unfortunately, these are what generally tend to underlie the 'consensus' on AI ethics (a false consensus, IMO, but still).

What's left? Some kind of Kantian-Marxist critical theoretical approach, or some kind of communitarian ethics-of-care approach.

Your paper takes a little from column A and a little from column B, setting up a class conflict between marginalized people and big tech, setting up a narrative of resistance, and at the same time drawing on ecotopian tropes of 'just enough', diversity, collaboration and intersectionality. Plus, from some third place, sustainability.

So, we come back to: AI is mathematics. And it might even (as I think) capture the mathematics behind cognition, sentience and consciousness (but of course you don't need to believe all that, AGI notwithstanding).

This to me leads to two threads of discussion:

1. Mathematics is undeniably good, but how much mathematics is good? What kind of mathematics is good?

2. The ethics of mathematics, captured in (1) above, and the ethics of other things.

I'll do both.

In fact, we can save a ton of time and resources through the use of mathematics.

But math isn't inherently good. Blockchain involves the wasteful use of mathematics just to make something difficult. Doing statistics just to support gambling preys on the vulnerable.

And there is an inherent uncertainty involved in what we count, how we count, and who does the counting. I personally resist reframing everything in terms of 'value'. Justice does not equate to wealth.

And that leads to the second part: the contrast between a world defined by money, and everything else.

For example, why do we continue to use fossil fuels? Because it's cheaper, and industry doesn't care about anything else.

This debate has almost nothing to do with AI. I mean, maybe AI can make electrical grids more efficient, or nuclear plants safer, but it isn't at the core of the debate.

Big tech and AI are not synonymous.

Imagine someone said, when discussing math, that "resistance and refusal are important pillars." It makes no real sense.

I can see the sense of resisting oligarchy, authoritarianism, and inequality. But it doesn't follow that we should resist AI because they use AI. We should resist *their ownership* of AI.

If you take math, and apply it to everything we do, that's AI. It's all of ours to use toward, not against, a world that works for all of us.


Patterns, Facts, and AI 1 Feb 4:36 AM (2 months ago)


Replying to this thread.

I think if you scan through a hundred papers and find a pattern in those papers, and use that pattern to make something, you haven't copied anything.

Harder to do: you can also scan through a hundred papers, find the one original thing, and use that to make something. Then the question of whether you've copied is a matter of degree.

In neither case does it matter whether a machine or a human does this. What makes something ethical or not is the act, not the technology.

Anyhow. I scan through a hundred papers (more or less) every day to produce my newsletter. I try to find the original and highlight that. And my own original work is based on patterns I see in the data.

I passed over Doug's article because (in my view) it had been done before (not the least of all, by me, in 2019). There's no ethical issue or blame here; most of what is produced in the world (including most of what I produce) is not original.

Again, it's not the tech.

I think the hardest of all to produce is something original, based on a pattern no one has seen, that is useful to people.

I think that if a machine did that, there would be no real issue with the fact that it was a machine that did that, because we'd all be too busy trying to take advantage of this new knowledge.

But it's hard for a machine to find a new pattern, because there's so much pattern recognition already in human discourse. Useful is also really hard.

And just so, as Belshaw comments, "in some circles not being rabidly 'anti-AI' gets you tarred and feathered." This is a pattern of discourse. It gets magnified and reflected back and forth.

Certain patterns (eg., 'AI copies') get reified until they become 'fact'.

As @poritzj says, it's the data used to find the patterns that matters, for all sorts of statistical reasons. But who among the human pundits is honest about the complete corpus of material they draw upon?

That's where the ethics (if ethics = good science) comes into it, if at all.

If your data is bad - if it's not diverse, if it's not informed, if it's propaganda - then your pattern recognition is bad, and you reify the wrong things into facts the promotion of which is actually harmful.

That's my main criticism of most anti-AI writers: that their data is bad. They draw from popular and commercial press, or from (say) commercial publishers with a vested interest.


How to 'Like' a Post in Bluesky using Javascript 15 Nov 2024 1:57 PM (4 months ago)

 


The Problem

If you look at the API reference for Bluesky to 'like' a post, you'll find it isn't there.

If you use the NPM package for the AT protocol, you can 'like' a post with this simple command:

    await agent.like(uri,cid)

This works - but you have to have installed and loaded the entire module to make it work, and then you're in module hell with versions and upgrades and the rest.

If you're ChatGPT you expect there's a simple API function, like this:

    await fetch('https://bsky.social/xrpc/app.bsky.feed.favoritePost',{...})

but there isn't one.

What's key about Bluesky is that it separates the application layer and the data layer. This matters a lot because it changes how we need to think about doing things like giving a post a 'like'. It's not simply a matter of performing a function. It's a matter of creating a record in a repository. Which repository? My repository.

My repository is currently hosted on Bluesky, but I can imagine that changing. The application I'm using is also hosted by Bluesky, but that could change as well. So we need to be smart about how we describe the content we're liking and where we're storing the record.

The Record

I first create the record I want to store. Like this:


        const record = {
            "$type": "app.bsky.feed.like",
            createdAt: new Date().toISOString(),
            subject: {
                "$type": 'com.atproto.repo.strongRef',
                uri: uri,
                cid: cid
            }
        }

In this record, I define the 'type' as a Bluesky 'like' record (and yeah, we could have any number of types). The 'subject' of this record is a specific post, which I identify as a 'strongRef' using both the 'uri' and the 'cid'.

The uri is the post's location as defined by the atmosphere (AT) protocol. It looks like this:   

  at://did:plc:v33us6ae36e4zl2qqijueffi/app.bsky.feed.post/3l576cazyjq2x  

The atmosphere address points to the repository where the post is located. The repository belongs to an individual who is identified by a DID - a distributed identifier. The repository is a 'post' repository, and the specific item has an identifier '3l576cazyjq2x'.

We can convert the AT address into a traditional web URL. The DID is associated with a 'handle', for example, 'guillecoru.bsky.social'. We can use bsky.app to get that post:

    https://bsky.app/profile/guillecoru.bsky.social/post/3l576cazyjq2x
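As a sketch of this conversion (my own illustration, not code from the post): given the handle, we can pull the record key out of the at:// URI and build the web URL. Note that resolving a DID to its handle requires a separate API call, not shown here, so the handle is simply passed in.

```javascript
// Sketch (not from the original post): convert an at:// post URI into
// a bsky.app web URL. Assumes the handle for the DID is already known.
function atUriToWebUrl(atUri, handle) {
  // An at:// URI has the form: at://<did>/<collection>/<rkey>
  const parts = atUri.split('/'); // ['at:', '', did, collection, rkey]
  const collection = parts[3];
  const rkey = parts[4];
  if (collection !== 'app.bsky.feed.post') {
    throw new Error('Not a post URI: ' + atUri);
  }
  return `https://bsky.app/profile/${handle}/post/${rkey}`;
}

console.log(atUriToWebUrl(
  'at://did:plc:v33us6ae36e4zl2qqijueffi/app.bsky.feed.post/3l576cazyjq2x',
  'guillecoru.bsky.social'
));
// → https://bsky.app/profile/guillecoru.bsky.social/post/3l576cazyjq2x
```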

The cid is the post's content-based address. It looks like this:   

    bafyreiccpgycnovrqguhbqgthtqbpfiyvhk52svq33uox65ojqz6qqewi4  

The content ID (cid) was created by taking the content of the post and running it through a hash algorithm. You can read about the process here. I've written about it before.

The Save

Now I want to save my record to Bluesky. For this I will use a fetch() command. Here's what it looks like:

        const response = await fetch('https://bsky.social/xrpc/com.atproto.repo.createRecord', {
            method: 'POST',
            headers: {
                'Authorization': `Bearer ${accessToken}`,
                'Content-Type': 'application/json',
            },
            body: JSON.stringify({
                collection: 'app.bsky.feed.like',
                repo: did,
                record: record
            }),
        });

Fetch is making an API request to post a new record. 

The headers contain my previously obtained access token. I obtain this when I create a session by logging in to Bluesky. It's a long string of characters.

The body of my request specifies the repository I want to add the record to. This will be my repository, identified by my DID, which I've stored in a variable called 'did'. I obtained this from Bluesky when I logged in. The body also specifies the collection I want the record to belong to, and of course contains the record itself.

That's it! In the full function I do some error checking just to make sure everything went OK.
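As a sketch of what that error checking might look like (this is my own illustration, not the author's actual code, and `handleCreateRecordResponse` is a hypothetical helper name): on success, createRecord returns JSON containing the new record's uri and cid. The example assumes an environment with the `Response` global (Node 18+ or a browser).

```javascript
// Hypothetical helper (a sketch, not the author's actual code):
// check a fetch() Response from com.atproto.repo.createRecord.
async function handleCreateRecordResponse(response) {
  if (!response.ok) {
    // Try to extract the server's error message, if any
    const err = await response.json().catch(() => ({}));
    throw new Error(
      `createRecord failed (${response.status}): ${err.message || 'unknown error'}`
    );
  }
  // On success the body contains the new record's uri and cid
  return response.json();
}

// Usage with a stand-in Response object (no network required):
const fake = new Response(
  JSON.stringify({ uri: 'at://did:plc:example/app.bsky.feed.like/abc', cid: 'bafyexample' }),
  { status: 200 }
);
handleCreateRecordResponse(fake).then(result => console.log('Stored at', result.uri));
```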

P.S.

This isn't the complete script. I just want to focus in this post on how to 'like' a post. You would still of course have to write the login script and get the post information. Those are exercises for another day.



Assessment of Barriers to Educational Technology Acceptance 27 Oct 2024 2:32 AM (5 months ago)

 

Automated transcription, lightly edited for paragraph breaks.
 
Welcome to the Sunday morning session, where the most important papers are presented.

So, a lot of this presentation wasn't actually in the paper.

It's part of a project that we did at the National Research Council with an organization called DRDC.

And much of the presentation wasn't actually available to be shown at the time the paper was submitted, but now it is available.

It was part of an overall training modernization process for Defence Research and Development Canada (DRDC). They were looking to conduct a study on factors that inhibited their instructors from using learning technology.

Our publications are available. I will show a QR code at the end of the presentation where the slides are available, and you'll be able to access them. The first of the two publications is the one this presentation is focused on: the analysis of barriers to technology adoption.
 
The second publication is an ontology framework for instructional strategies, and that's got a lot of interesting material as well. Both of these were authored by myself and my colleague, Bruno Emond. The paper that was created for this conference was written by myself, and it's a short summary of the results (short because we have page limitations).
 
My motivation for this presentation: it was kind of slammed by the critics, also known as the reviewers. So I thought I'd put this into a bit of context.
 
I wanted to provide, first of all, a description of the factors involved in my review of the study they were doing at DRDC. Now, this will be a review for a lot of people; I recognize that, and so it's pretty basic and not original. That's okay; I can live with that. But I found, when I talked about these results earlier, that some people in our community, the educational technology community, knew nothing about things like the technology acceptance model. I was a bit surprised. But you know, we assume that we're all from the same discipline, some kind of social sciences background. I'm not, actually; I'm from a philosophical background, and a lot of people don't have this scope of knowledge. So I wanted to present that.
 
I also always wonder how useful the studies in the literature about so-and-so's perception of such-and-such really are; the literature is filled with those, and I've always been a bit skeptical. So this informs my understanding of the thinking behind those studies.
 
And then finally, and this is kind of meta, to think about the limits of a cognitive approach to understanding things like this. I think of it as (and this is a technical term, not a disparagement) a folk-psychology approach to studying technology, and I would contrast it with a more eliminative approach, for example as offered by people like Paul Churchland and others, and that we see today in the development of neural networks, or large, probability-based, statistically based complex systems, which is quite a contrast to this approach. I can't deal with that in detail in this presentation, but it's in my thinking in the background.
 
So let's look at acceptance models to begin with. There's a history here, and again, this may be familiar to many of you, but it might not be, and that's the whole point of this. There are two types of models that have been important over the years. One is adoption theory, which is focused on the choices that individuals make as to whether or not to use a particular technology. By contrast, we also have diffusion theory, which considers the spread of a technology across an organization. A lot of the time these two concepts are just blended as though they're one, but in fact they are two very different phenomena. We have to be careful not to confuse them.
 
The first of these, of course, is Rogers' innovation diffusion theory. He describes five stages of diffusion, and in retrospect they seem pretty obvious: awareness of the innovation, persuasion of its benefits, decision to adopt the innovation, implementation of that decision, and confirmation of the innovation process. But now, more than 60 years later, it seems very naive.
 
Similarly, we have Ajzen's theory of planned behaviour; now we're only 40 years ago. It looks at what would cause changes of intention, and again, it's very folk-psychological: changes in the salience (the significance or relevance) of beliefs; new information, obviously; changes in confidence or commitments; personal development factors; individual decisions, their skills, their willpower if you will, their emotions; and external factors: time, opportunity, the influence of others. So this is a very individual-centred, very psychologically centred kind of approach. Does it explain technology adoption or diffusion? No, it does not.
 
So we get the widely used and well-known technology acceptance model; now we're only 30 years ago. It considers attitudes rather than behavioural intentions: previously, it was thought that adopting technology is a conscious decision that we make; now, we're thinking of it as a matter of attitudes, or ways of looking at the technology. It identifies two main predictors of adoption behaviour: the usefulness of the technology, and the ease of use of the technology. If you reflect on your own decisions with respect to adopting technology like PowerPoint, or the internet, or ChatGPT, or whatever, these are two major factors, but not a complete explanation, right?
 
Following from this, we also get the decomposed theory of planned behaviour (I just love some of these titles), which combines this model and the TPB that I described before to depict specific beliefs as decomposed into belief constructs. As an aside: are our beliefs really constructs? I don't really think so, but anyway.
 
Another way of looking at it is Straub's (2009) concerns-based adoption model; now we're only 15 years ago. It includes three of what they call diagnostic, judgment-free components, or three tests or surveys that are applied: the stages of concern, the levels of use, and the innovation configuration maps. All this goes to show that technology adoption is a complex, inherently social development process. Here we're just beginning to see the importance of social factors, rather than individual decision-making or psychological factors, in technology adoption and therefore diffusion.
 
Finally, we get what's called UTAUT (and there's also a UTAUT2), the unified theory of acceptance and use of technology, which gives us a whole list of factors: basically, taking all of the previous studies, mashing them together, and saying this great big complex explains, or purports to explain, why people adopt technology.
 
Here's UTAUT2. It adds three new constructs to the original UTAUT model: hedonic motivation, price value (which I don't quite get, but it kind of makes sense), and use behaviour. So, if we think about this, we've got a large range of different factors influencing decisions.
 
But what prevents decisions? This takes us to the discussion of barriers to technology adoption. The technology's not available: that's pretty obvious. Reliability and complexity of the technology: nobody wants to use Outlook; it doesn't work. Faculty with poor self-efficacy may be reluctant to try technologies; that also makes sense. And then there's turning away from technology and influencing other people to turn away from technologies: the skeptic-in-the-room kind of phenomenon.
 
So, let's think about this overall. Technology acceptance is individual, yes, but it's influenced by social conditions: desires, beliefs, fears, etc., the social factors surrounding people. But it's also organizational; the most obvious factor is the technology available, but there's a range of organizational factors. And all of this shows the difficulty of obtaining a structured description of technology adoption.
 
Now I've put on the side here the DRDC model. This is the model that my colleagues at DRDC came up with, which I was evaluating for this project. They basically combined much of what we saw here to come up with a model of five sets of factors: technology, process, administration, environment, and faculty. We'll return to this.
 
So that leads us to risk management, because DRDC wanted to think of technology adoption from the perspective of risk. Now, risk management is a phenomenon that doesn't come up so much for individuals, although of course individuals may be risk-takers or risk-averse; it's more a corporate kind of thing. When organizations do corporate planning, they come up with risk management profiles. There's a history of this as well. Risk assessment models generally measure exposure to negative consequences. They may be qualitative or quantitative or a combination thereof, and they're usually subjective: they measure perceived risk or expected outcomes. Sometimes they can be objective, but it's hard, right? Because we're talking about the future.
 
Risk management begins with the Fine-Kinney method: Fine comes out in '71, Kinney in '76. It calculates a risk score based on scores for the probability of the event happening, the exposure to that event, and the consequences, that is, how bad the consequences would be. Each is weighted equally, and then you have all kinds of different theories after that which vary this weighting somewhat.
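The arithmetic here is simple enough to sketch in a few lines. Here's a minimal JavaScript version; the function names and the score bands are mine, and the band thresholds vary from source to source:

```javascript
// Fine-Kinney risk score: probability x exposure x consequence,
// each factor scored on the method's ordinal scales, weighted equally.
function fineKinneyScore(probability, exposure, consequence) {
  return probability * exposure * consequence;
}

// One common banding of the resulting score (thresholds are illustrative).
function riskBand(score) {
  if (score > 400) return 'very high';
  if (score > 200) return 'high';
  if (score > 70) return 'substantial';
  if (score > 20) return 'possible';
  return 'acceptable';
}
```

So a fairly likely event (6) with frequent exposure (10) and serious consequences (7) scores 420 and lands in the top band, while a score of 10 is treated as acceptable.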
 
Later on, you have the analytical hierarchy process model, from Harker in '89. This is a method for weighing and combining multiple goals or outcomes, multiple criteria, and a classification scheme for types of risk. So you see, on the one hand we have types of risks, and on the other hand we have a weighting of probabilities of risk, and again, things are getting complex.
 
Ultimately we come up with what is widely used in the field today; we use it at NRC, and every organization I've ever seen uses it: the risk matrix. It basically combines the likelihood of a bad thing happening with how bad that thing is, to give you a weighting on a green-yellow-red kind of scale, where red is the really serious kind of risk that you might be looking at, and green is kind of a risk, kind of not.
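The idea can be sketched as a tiny function. This assumes a 5x5 matrix and made-up band boundaries; real organizations tune both the scales and the cutoffs:

```javascript
// Risk matrix: likelihood and severity each rated 1-5; the product is
// mapped onto a green / yellow / red scale. Boundaries are illustrative.
function riskMatrix(likelihood, severity) {
  const score = likelihood * severity;
  if (score >= 15) return 'red';    // serious: act now
  if (score >= 6) return 'yellow';  // monitor and mitigate
  return 'green';                   // kind of a risk, kind of not
}
```

A likely, severe event (5, 5) comes out red; a moderate one (3, 3) yellow; an unlikely, minor one (2, 2) green.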
 
So, what I found interesting when thinking about risk and risk management is that risk is self-referential. Just as an aside here: the DRDC study involved focus groups of people talking about the technology, why they would use it, why they wouldn't, and what they would consider the risks to be. And when you discuss, you know, what's the worst thing that could happen, your assessment of the risk actually begins to change. You might realize, for example, that it's not as bad as you thought it was, or conversely, you might decide it's much worse than you thought it was. You didn't realize that until you talked about it.
 
Perception of risk varies a lot depending on point of view. And interestingly, because this is happening in an organizational context, risk as an individual construct is very different from risk as an organizational construct. You may not care about something that might be fatal for the organization; so what, you can get another job. Similarly, the organization might not care about the risk to you, just the risk to itself.

They're using surveys. How do we know these surveys actually measure anything of value? Well, that leads us to the subject of validity and reliability.
 
The definitive discussion of this is the AERA (American Educational Research Association) standards discussion in 2014. Basically, reliability refers to the consistency of the measure: do you get the same results every time you do it? Validity refers to the accuracy of the measure: are you measuring the right thing? The picture is clearer than my description, as is usually the case.
 
So how do we test for these? One way is content validity, which involves assessing whether the questions in the survey cover the entire range of issues or concepts being studied. Very important, obviously; it's easy to study one small factor out of a whole range of factors. There are different ways of testing for content validity: testing it against external criteria, for example, or looking at elements of content representativeness. You would talk to experts in the field and ask them, basically, are we covering everything that we need to cover?
 
Also construct validity; this is important. The phrase that really jumps out at me is unidimensionality and local independence of the options. Now, in our keynote yesterday, one of the surveys the speaker gave us (obviously not a scientific one, because he had us clap) was: how often do you use AI, once a day, once a week, or once a month? What's wrong with that? Well, what if you use it, like I do, four times a week? It's not once a day; it's not once a week, right? His survey failed the test of construct validity: it measured three points out of a much wider set of possibilities. It should have been something like once a day or more, up to once a week, up to once a month, etc. So, those sorts of questions.
 
Criterion validity: the extent to which the operationalization of a construct, such as a test, relates to or predicts a theoretical representation of the construct. That's a bunch of fancy words for saying: does the thing you're measuring actually relate in some way to real-world conditions? The way we establish this is through a mechanism known as a logic model. The logic model takes the elements of your survey, what you're asking about, and maps them to the elements of reality. Again, if you don't have that, how do you know that what you're testing for is actually measuring anything in reality? Of course, the philosopher in me asks, how do you know what reality is? But: short talk.
 
Test-retest reliability: simply, can you get the same results over and over again if you do the same study again? Ideally, you shouldn't do it with the same people; sometimes you can't avoid it, so you do it with a similar group of people. But in some way you need to determine that your measure will work more than once.
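Test-retest reliability is often reported as the correlation between the two administrations of the measure. A minimal Pearson correlation sketch (the helper name is mine):

```javascript
// Pearson correlation between two administrations of the same measure:
// 1 means perfectly consistent results, 0 means no relationship.
function pearson(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}
```

Each array holds the scores of the same (or a similar) group on the first and second administration; a high coefficient suggests the measure works more than once.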
 
Finally, internal consistency: the extent to which the questions in the survey are measuring the same construct. Now, we can get fancy with that, but if your survey asks on the one hand about perceptions and in the next question about what's out there in the real world, or asks you to give your point of view while at the same time asking you to take a neutral point of view, you do not have internal consistency, right? The frame of representation, if you will, needs to be the same throughout the survey. You can construct a more complex frame, but you need to know what that frame is.
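The standard statistic for internal consistency is Cronbach's alpha. Here's a sketch, assuming scores are given as rows of respondents by columns of items (the function names are mine):

```javascript
// Population variance of an array of numbers.
function variance(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  return xs.reduce((a, b) => a + (b - mean) ** 2, 0) / xs.length;
}

// Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / total variance).
// rows[i][j] is respondent i's score on item j.
function cronbachAlpha(rows) {
  const k = rows[0].length; // number of items
  const itemVars = [];
  for (let j = 0; j < k; j++) {
    itemVars.push(variance(rows.map(r => r[j])));
  }
  const totals = rows.map(r => r.reduce((a, b) => a + b, 0));
  const sumItemVar = itemVars.reduce((a, b) => a + b, 0);
  return (k / (k - 1)) * (1 - sumItemVar / variance(totals));
}
```

When every item moves in lockstep across respondents, alpha comes out at 1; items measuring unrelated constructs drag it down.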
 
So, if we think about it, and here I'm stealing from Robert John Ackermann's book Data, Instruments, and Theory, there are three major elements of study validity. These are the main elements of scientific investigation generally: data, instrument, and theory.
 
With data, we have issues of provenance, completeness, etc., and data cleaning, which comes up a lot in AI. With the instrument: is a survey actually an instrument, or is it a tool of creativity? But also, what are the limitations on what we're studying, what are the limitations on our instrument, whatever it is? We could talk about that a lot, but again, short talk.
 
And then there's the relation between theory and data. I'm sure you're all familiar with the concept of theory-laden data, from Larry Laudan originally: just how much is your theory influencing the data that you are representing? So, again, this is not in the actual paper that appears in the volume, but we now take all of these considerations and assess the study that DRDC was doing. DRDC came up with their model, which I referenced earlier, with these five major factors and, as you can see, many sub-factors. I'm not going to get into the details of those because they're not that relevant, but that was the model.
 
So they did an initial run of their study and came away with these takeaways: technology access and complexity were important; process, management support, and learner needs were important; the environment, the instructor's role, and (interestingly) legal concerns were important; and the training of stakeholders was important. These were the things that they found were important.
 
So I was asked to evaluate their method with respect to their results. One of the things that they tried to do is, instead of looking at barriers as barriers, they looked at barriers as pathways. Why? Because they're an organization: you don't want to be negative and say 'barriers'; you want to be positive and say 'pathways'. It was an interesting thought, but it resulted in a lot of double negatives, and in one case, triple negatives.
 
So it was less focused on individual acceptance and more focused on, you know, acceptance as an institution. And I questioned whether that was a concept that was even coherent. Can you talk about institutional acceptance? I suppose you can, but not if you're asking about individual acceptance on the survey. Similarly, there were questions about the organizational versus individual perception of risk. What happened is that it was never clear in this survey whether respondents were talking about other people.
 
Were they talking about other people (do other people accept technology?) or about themselves (do I accept technology?)? You get a lot of that kind of confusion in a lot of these studies. Individual perceptions are influenced by one's own experience, and often by behaviour as modeled by others; but in the organizational context, influence by others is basically abstracted out of existence. It doesn't exist anymore, because you can't talk about modeling by others in an organizational context. It doesn't make sense.
 
Similarly, the emphasis on role, which is how they approached this pathways model, basically tried to force individuals responding to questions about technology acceptance to adopt what might be called a 'view from nowhere', an objective stance. These are the sorts of confusions that can happen when you're not clear about all of these categories.
 
So that's basically where it ends. I hope you enjoyed it. For those of you who are familiar with these concepts, I hope you enjoyed the romp down memory lane, and for those of you who found them new, I hope you found them interesting. There is, as promised, the QR code for the slides. Thank you.


Coming Together 23 Oct 2024 1:37 PM (5 months ago)

 


When I first started in this business of working online some thirty years ago I was somewhat surprised to learn that it would involve more, not less, travel. And so here I sit on a red-eye to Morocco, not for the first time, to immerse myself in a world where the call to prayer happens five times daily, where I will give my talk in French, where, maybe, I will be able to buy a fez in Fes.

It's not my world as I imagined it when I was young. Oh sure, I had a wanderlust from an early age, but I really had no idea of what to expect when I got to wherever I was going. How could I, an average boy being raised in a rural community in eastern Ontario, Canada? Even if I did read the news every day - and I did, before I delivered the paper that day - it would not and could not prepare me to see the world.

And it's not an easy world to travel through. There have always been wars, but they seem to be more intense today, with the stakes being greater. There have always been differences of opinion, but it feels today that we're too quick to jump toward hate. There have always been rumours and conspiracy theories, but today they're funded by state actors and spread at the speed of the internet.

And that leads me to the main point I want to talk about: the idea that we are fracturing into different factions, each with their own version of the truth, drifting toward a world where there are no facts, where nothing can be known for sure, where there are no foundations, no bases for ethics and morality, no common conception of the good, just individual tribes each fighting for a share of fewer and fewer resources on a stressed planet.

And I'm here to say that it isn't so.

No, that does not mean that I'm here to announce some common foundation or shared truth that we must all believe and that can form the basis for a future society, as so many others before me have. I doubt that such a foundation exists, nor would it be universally accepted even if it did. Someone could say "here is a hand" and there will always be someone who, with good reason, will object. Ceci n'est pas une pipe. It's all wordplay.

No, what I want to question is the idea that we're fracturing. I want to question the idea that there was once a common conception of The Way The World Is, that there were trusted sources on which we could all rely, that we knew what truth is, as plainly as we know the back of our hands.

Look at that hand, maybe. Is it young or old? Is it smooth or scarred? What colour is it? Do you remember all the spots you are now looking at? Were they all there last month? That large spot - is it the same size it was a year ago? I am describing my hand, of course. What is it about your hand that I couldn't imagine?

You see, the dominant narrative is that we were all once one society, but now we're drifting apart. But that has never been the case. We have never been one society. Not even those of us who were living together in a small eastern Ontario village.

We were just far away enough from Ottawa that the newly improved highway meant people could live in the country and commute to the city. That's what my father did, to his job at Bell. We were a city family, and lived in a different world from those around us who made their living on the farm. There was just enough farmer in us - my mother grew up on a farm - that we could make it work.

As a child my community was defined by its edges. When we lived in a Montreal suburb, we never spoke to the 'Frenchies', who lived next door. When we lived in Metcalfe, it was a clash of expectations; I was led to value reading, good grades, chess and public speaking, but the only currency of value in a rough and tumble farming community was achievement in sports.

We - the different communities - could and did live apart. There was virtually no overlap. It was an incredibly rare and valuable person - a Ralph James - who could transcend these boundaries. Last I heard, he was working at the gas station, which of course was the centre of the community.

It's not the same today. Digital technology in general and social media in particular are bringing communities together whether we want them to or not. It's called 'context collapse'. As originally envisioned, it means something like the idea that your work, family and friends, three separate contexts, are now collapsing into a single space. You see them all in the same environment - on Twitter perhaps, or Facebook, or Reddit, or Insta. That joke that would kill with your friends horrifies your parents and might get you fired at work.

If it were only that, it might be OK. But there's the aforementioned fracturing of society to consider as well. When I was growing up there were many types of people I never saw: black people, gay people, Muslims, indigenous people. I learned about them, a bit, but what I read in the news and in the library didn't really reflect the reality once I met it.

In many ways I was lucky. My love of learning led me to finish school in the community, to read people like Francis Moore Lappe while still a teen, to move across the continent to find work, to go to university, to meet gay people and Marxists and hippies, to travel to indigenous communities, and eventually, to meet with and work with and visit people around the world. It is a truly rare experience and I am incredibly grateful for having had the opportunity.

And today, having had such experiences, I find it laughable that I ever worried about how I dressed, how I looked, how I seemed to other people. The things that I thought were so important, especially when I was young, now seem so trivial in this wider world. It was as Laozi said: all these things, these definitions of what is and what isn't, what's right and what's wrong, what is valued and what is worthless, are all artifices we create, and not part of some underlying 'reality', whatever that is.

But if all this is true (I hear you ask) then how could the dominant narrative be false? We live in a world now where instead of being one society, people are quoting Laozi and creating their own facts, their own truths, and even inventing their own sense of right and wrong. How can our society survive if we can't even talk to each other?

I'm here to say, our society isn't fracturing; it was always fractured, always divided into myriad subsocieties. True, it never felt divided. But that was because these others were beyond our view. They were there, but we literally did not see them, literally did not know that they existed, and even if we had any inkling, we didn't want to know.

It's like when I was a kid. It's not like black people, gays, and Indigenous nations didn't exist. But they existed in my consciousness, if they existed at all, only as fables, only as an artifice created by news media and television and even the books I read and classes I took.

As I grew older, as I read more, as I traveled more, it was like a mist separating me from them began to lift, and I could see - at least partially - into their communities.

And this same thing is what is happening on a global scale. We are not fracturing, we are growing closer together. People and cultures and societies who were never a part of our daily existence are now right in front of us, unavoidable. A gay person is gay, and you have to look. A Muslim references Allah, and you have to listen. Someone you met online talks about Tang poetry, and you have the whole library at your fingertips (with Google to translate it for you).

It's no surprise that people want the fog to come back down, so that things will be the way they were. But nothing gets better that way. The old prejudices return, the old myths prevail. Some cultures are repressed, other cultures are exploited, and others threaten the well-being of an entire planetary community.

We will never be united, never share a single common language or world view or way of life, but at least now we can see that each other exists. We're not required to agree with them, to share their values, or to believe what they believe. But what we can do, at least, is talk with each other - to see them as less of the 'other' and more like different versions of ourselves.

People talk of this being the atomic age, the space age, the information age - but in reality, it is the age of the great coming together. All the possibilities of human existence are laid out before us like some great smorgasbord. It can be hard, challenging, frustrating, but it is, for those of us who choose it, rewarding and enlightening.

Where we were apart, we are now together, a dense mesh of possibility.


Automated Translation in Javascript with Google Translate API 12 Oct 2024 5:09 PM (5 months ago)

The main thing to know here is that Google's AI does not know how Google's API interface works. It will give you advice that works fine with GET requests but translating anything fancy requires POST, which Google's AI doesn't understand. That's what I spent most of my afternoon discovering.

The rest of my afternoon was spent figuring out how it actually works, though. There are two major steps: first, setting up access to the translation API, and second, actually coding the Javascript. The second step is the easier step.

Setting Up Access

Note that I have a Google account and therefore access to the Google console. You'll need to set this up with Google Cloud. There may be costs involved; I have a business account which I pay for but I assume this would still work with a personal account. If you need to pay money anywhere I'm sure Google will tell you.

1. Create a Project in Google Cloud

2. Enable 'Cloud Translation' in your project

3. Create an apiKey in your project 

Coding the Javascript

The trick here is that you're using async functions with fetch requests. This forces Javascript to wait for a reply from the translation engine.

1.  The input form

  
  <input type="text" id="inputText" placeholder="Enter text to translate">
  <input type="text" id="inputProject" placeholder="Enter project ID">
  <input type="text" id="inputAPI" placeholder="Enter API key">
  <button onclick="handleTranslation()">Translate</button> 
  <p id="detected-lang"></p>
  <p id="translated-text"></p>

This will give you an input form that looks like this:

2. Get the data from the form

The rest of the code will be in Javascript and placed in between the <script> and </script> tags. First, we get data from the form:

async function handleTranslation() {
    const inputText = document.getElementById('inputText').value;
    const apiKey = document.getElementById('inputAPI').value;
    const projectId = document.getElementById('inputProject').value;
    const translatedText = await processTranslation(inputText,apiKey,projectId);
    document.getElementById('translated-text').innerText = translatedText;
}

We get the three values from the form, pass them to a function processTranslation(), then display the result on the web page.

3. Process translation

We're going to proceed in two steps. First, we're going to detect what language the original text is in. Then, we'll translate it to English.

async function processTranslation(inputText,apiKey,projectId) {
    try {
        const detectedLang = await detectLanguage(inputText,apiKey,projectId);
        
        if (detectedLang === 'en') {
            console.log('The text is already in English:', inputText);
            return inputText;
        }

        const translatedText = await translateToEnglish(inputText,apiKey,projectId,detectedLang);
        return translatedText;
    } catch (error) {
        console.error('Error during translation:', error);
    }
}

This function performs the two processes with error checking. Notice that if the language detected is English, we won't actually perform the translation.

4. Detect language

The Google language API has a 'detect language' function with its own URL. Here we'll access that function with fetch(), and return the result.

async function detectLanguage(inputText,apiKey,projectId) {
    const response = await fetch(
        `https://translation.googleapis.com/language/translate/v2/detect?key=${apiKey}`, 
        {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ q: inputText })
        }
    );

    const data = await response.json();
    const detectedLang = data?.data?.detections?.[0]?.[0]?.language || 'unknown';
    console.log(`Detected Language: ${detectedLang}`);
    return detectedLang;
}

The Google API expects the apiKey in the URL being requested by fetch(), which is (to me, at least) a bit unusual. Our input text is placed into JSON as the request body. The result is found deep inside the JSON returned by the API and placed in the variable detectedLang.
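For reference, the detect endpoint's JSON looks roughly like this; the values shown here are illustrative, not real output:

```javascript
// Illustrative response shape from the v2 detect endpoint.
const data = {
  "data": {
    "detections": [
      [
        { "language": "fr", "isReliable": false, "confidence": 0.92 }
      ]
    ]
  }
};

// detections is an array (one entry per query string sent) of arrays of
// candidate detections, hence the double index [0][0] in the code above.
const detectedLang = data.data.detections[0][0].language;
```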

5. Translate the text

We use fetch() again, defining the API key just as previously, and include the new text and the source language in the request body.

async function translateToEnglish(inputText,apiKey,projectId,sourceLang) {
    const response = await fetch(
        `https://translation.googleapis.com/language/translate/v2?key=${apiKey}`, 
        {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
                q: inputText,
                source: sourceLang,
                target: 'en',
                format: 'text'
            })
        }
    );

    const data = await response.json();
    const translatedText = data?.data?.translations?.[0]?.translatedText || '';
    console.log('Translated Text:', translatedText);
    return translatedText;
}

The result is again found deep in the JSON returned by the API. I save it to the console log (so you can see it by inspecting the page) and also return it so it can be displayed on the web page.

That's our function!

Some Notes

As I mentioned, the data is returned in a JSON structure called 'data' from fetch(). We need to fish it out of that structure; that's what the long chain data?.data?.translations?.[0]?.translatedText refers to.

Here's what the actual JSON might look like:

data: {
  "data": {
    "translations": [
      {
        "translatedText": "Hello, world!",
        "detectedSourceLanguage": "ja"
      }
    ]
  }
}

The question marks (optional chaining) return undefined instead of throwing when a value is missing at any step, so if there's no value for 'data' the function returns '' (via the || fallback) instead of failing with an error.

Also...

Thanks to Blogcrowds for the Parse HTML page that made it easy for me to add code in this post. 

And finally...

Here's the full page. You'll have to create your own project and API values. https://www.downes.ca/translate.html





Artificial Intelligence in Education: Context, Rules, and Limits 26 Sep 2024 11:38 AM (6 months ago)

 

Good morning, ladies and gentlemen, greetings from Canada. Thank you for inviting me to speak at this important forum on artificial intelligence and education. Here is a QR code that you can scan to access these slides, with a transcript, just in case my French is not clear.

Context, Rules, and Limits

We will talk about artificial intelligence in education in general terms. I will start by explaining what AI is and how it works, very briefly, and then I will discuss the three main themes of the conference: context, rules, and limits.

I interpret context as “use,” and I will look at the many ways in which AI can be used in education.

I interpret rules as “principles,” and I will look at what are the fundamental principles that govern how and why we use AI in education.

And I interpret the limitations as practical guidelines and will look at appropriate mechanisms for teaching and learning, governance, and security.

What is AI

We have two types of AI. Weak AI, also called narrow AI, is capable of performing a specific task for which it was designed. Strong AI, on the other hand, is capable of learning, thinking, and adapting as humans do. That said, strong AI systems do not yet exist. In this sense, the term “AI” is an aspiration, not a description of what has actually been created.

Weak AI

Then we have two types of weak AI. An expert system is based on rules, facts, goals, and domain-specific knowledge provided by human experts. Expert systems are based on ontologies, which define what kinds of objects have what kinds of properties. Knowledge, rules, and ontologies are described using a system of formal symbols, i.e., a language.

Machine learning models, on the other hand, are entirely data-driven. Input data is processed by mathematical and statistical algorithms, resulting in what are called “models.” Data can be in any form and are represented as mathematical “vectors” of data.

Machine learning

Deep learning is a type of machine learning. Traditional machine learning is “supervised,” meaning that it requires human intervention to train the model. Deep learning, on the other hand, uses artificial neural networks to mimic the learning process of the human brain and is capable of unsupervised learning.

Deep learning

Almost all recent advances in artificial intelligence rely on deep learning. Deep learning is a method that allows computers to learn by example, much like humans do. It uses a layered structure of algorithms called neural networks to process data, recognize patterns, and make decisions.

Neural Networks


What do we mean when we say that a neural network learns? Neurons receive inputs, process them, and produce an output. They are organized into distinct layers: an input layer that receives the data, several hidden layers that process that data, and an output layer that provides the final decision or prediction. 

There are adjustable parameters within these neurons called weights and biases. As the network learns, these weights and biases are adjusted. So when we say that a neural network has “learned,” we mean that its weights and biases have been successfully adjusted.

Neurons

So what are weights and biases? Weight is the strength of the connection between one neuron and another. Not all connections are created equal. Some are stronger than others.

Bias is the sensitivity of the receiving neuron. How likely is it to respond to input signals? This is one way to understand the importance of that particular neuron. The result is sent through an “activation function” that determines whether the neuron sends a signal to the next layer of connected neurons.

There are many types of neural network algorithms. They are all based on different ways of adjusting the parameters presented here.
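To make the weights-and-biases story concrete, here is a single neuron sketched in JavaScript. This is a toy, and the sigmoid is just one possible choice of activation function:

```javascript
// One artificial neuron: a weighted sum of inputs plus a bias,
// passed through an activation function (sigmoid, for illustration).
const sigmoid = x => 1 / (1 + Math.exp(-x));

function neuron(inputs, weights, bias) {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], 0) + bias;
  return sigmoid(sum);
}
```

With zero weights and zero bias the neuron is indifferent (it outputs 0.5); a large positive weighted sum pushes the output toward 1. "Learning" means nudging the weights and bias until the outputs match the training examples.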

The AI Workflow


Creating and using artificial intelligence is the result of a complete workflow that starts with collecting data and ends with applying an AI model at scale.

When we think, for example, of ChatGPT, we’re talking about how it collected data from places like Twitter and Reddit and other sources on the web, processed the data, and applied it at scale in the form of an application that people can use on the web.

Because there are many different types and methods of designing AI algorithms, it becomes more relevant at this point to talk about the different types of artificial intelligence by considering their context of use.

How do we think?

Also, the different types of artificial intelligence raise an important fundamental question for educators: How do humans think? What are we actually doing when we see, plan, and decide?

Many people think that human knowledge and learning are based on rules and categories. This belief often goes hand in hand with the belief that all thought takes place in language. Language is the basis of logic and reason. But then, how do babies and animals think? Or maybe we think they don’t think at all.

Other people think that human reasoning is based on probability and statistics. Economists sometimes seem to think this way. Are there laws of thought, as there might be laws of nature?

I think that knowledge and learning are based on pattern recognition. Everything we experience, everything we think, forms a deep and complex neural network that works by recognizing patterns in data. Most of this happens automatically, unconsciously.

But if that’s true, how do we explain why we did what we did, or why we believed what we believed? Do we really have reasons? Or are the reasons we give just phrases that sound good in context?

We should think about these questions as we move forward.

Context

As I said earlier, I interpret context as “use” and will examine the many ways AI can be used in education.

In the literature, four traditional applications of AI are distinguished: descriptive, diagnostic, predictive, and prescriptive. Recently, generative AI has been added. And I predict a sixth type of application: deontic AI.

Descriptive AI – “What happened?”


Descriptive analytics includes analyses focused on description, detection, and reporting, including mechanisms to extract data from multiple sources, filter it, and combine it. The output of descriptive analytics includes visualizations such as pie charts, tables, bar charts, or line graphs. Descriptive analytics can be used to define key indicators, identify data needs, define data management practices, prepare data for analysis, and present the data to a viewer. (Vesset, 2018).

Typical applications include:

·   Monitoring

·   Systems analysis

·   Institutional compliance

·   Student profiles

·   Dashboards

It often relies on human labeling or categorization.

There is significant overlap here between descriptive analytics, data science, and data literacy.
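As a concrete illustration of the descriptive pattern described above, here is a minimal Python sketch: extract records from two hypothetical sources, filter and combine them, and produce a chart-ready summary. The event data and field names are invented for illustration.

```python
from collections import Counter

# Hypothetical records from two data sources describing course activity.
lms_events = [
    {"student": "a", "action": "login"},
    {"student": "b", "action": "submit"},
]
forum_events = [
    {"student": "a", "action": "post"},
    {"student": "c", "action": "post"},
]

# Extract and combine data from multiple sources, then filter it.
events = lms_events + forum_events
posts = [e for e in events if e["action"] == "post"]

# Describe and report: counts per action, ready for a bar or pie chart.
report = Counter(e["action"] for e in events)
print(report)      # Counter({'post': 2, 'login': 1, 'submit': 1})
print(len(posts))  # 2
```

The point of the sketch is that descriptive analytics stops at summarizing what happened; no inference or prediction is made.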

Diagnostic AI – “Why did this happen?”


Diagnostic analytics looks deeper into the data to detect patterns and trends. Such a system could be thought of as being used to draw an inference about a piece of data based on patterns detected in the sample or training data, for example, to perform recognition, classification, or categorization tasks.

Applications include:

Anomaly detection

Anomaly detection involves tasks that target the identification of outliers or extreme data points in a large data set.

·   For example, spam detection, plagiarism detection, counterfeit detection

·   Security

·   Access control

·   Surveillance
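A minimal sketch of the idea, flagging values more than two standard deviations from the mean; the threshold and the data are illustrative assumptions, not a standard from the talk.

```python
import statistics

def find_outliers(data, threshold=2.0):
    # Flag points whose distance from the mean exceeds
    # `threshold` standard deviations (an assumed cutoff).
    mean = statistics.mean(data)
    stdev = statistics.pstdev(data)
    return [x for x in data if abs(x - mean) > threshold * stdev]

scores = [71, 74, 69, 72, 70, 73, 12]  # one extreme data point
print(find_outliers(scores))           # [12]
```

Real systems use far more sophisticated detectors, but the shape of the task is the same: characterize the bulk of the data, then report what falls outside it.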

Dependency modeling and regression

Dependency modeling targets the identification of specific associations between data points that might otherwise go unnoticed.

·   For example, sentiment analysis

·   Opinion sampling
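A minimal sketch of dependency modeling as regression: fitting a line through two invented variables to surface their association. The data and variable names are made up for illustration.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

hours = [1, 2, 3, 4]       # e.g. hours of study (invented)
grades = [52, 54, 56, 58]  # e.g. grades: perfectly linear here
slope, intercept = fit_line(hours, grades)
print(slope, intercept)    # 2.0 50.0
```

The fitted slope is the "association" the paragraph describes: a quantified dependency between two variables that a table of raw numbers would not make obvious.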

Clustering and classification

These tasks segment data into similar clusters based on the degree of similarity between data points.

·   For example, automated grading

·   Skills assessment
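A minimal sketch of clustering: a tiny one-dimensional k-means on invented marks. Real systems cluster high-dimensional vectors, but the mechanism is the same: assign each point to its nearest centre, then move each centre to the mean of its cluster.

```python
def kmeans_1d(data, centres, rounds=10):
    # Illustrative 1-D k-means, not production code.
    for _ in range(rounds):
        clusters = [[] for _ in centres]
        for x in data:
            nearest = min(range(len(centres)),
                          key=lambda i: abs(x - centres[i]))
            clusters[nearest].append(x)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

marks = [10, 12, 11, 48, 50, 52]  # invented data with two groups
centres, clusters = kmeans_1d(marks, [0.0, 100.0])
print(centres)   # [11.0, 50.0]
```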

Summary

Condensing data for easier reporting and consumption while avoiding the loss of more valuable and detailed information that we can use for clearer decision-making.

For example:

·   Content summarization

·   Audio and video transcription

·   Supporting special needs

·   Peer feedback analysis


Predictive AI – “What will happen?”


Predictive analytics answers the question: what will (likely) happen, based on identifying patterns and trends in existing data, and extrapolating that pattern or trend to likely future states. For example, such an analytics system might look at a student’s participation in a course and then predict whether the student will pass or fail.
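A minimal sketch of the pass/fail example from this paragraph, using invented participation data and a simple nearest-neighbour rule in place of a real trained model.

```python
# Invented past cases: (forum posts made, course outcome).
history = [(2, "fail"), (3, "fail"), (8, "pass"), (9, "pass")]

def predict(posts):
    # Extrapolate from the most similar past case (a 1-NN predictor).
    nearest = min(history, key=lambda case: abs(case[0] - posts))
    return nearest[1]

print(predict(1))   # fail
print(predict(10))  # pass
```

Even this toy version shows the defining move of predictive analytics: a pattern in existing data is extrapolated to a not-yet-observed future case.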

Typical applications include:

·   Resource planning

·   Learning design

·   User testing

·   Identifying students at risk of failure

·   Pedagogical advice

·   Precision education

·   Student recruitment

Prescriptive AI: “How do we get there?”

Prescriptive analytics recommends solutions. Using prescriptive analytics can inform a human user or a computer system of a need that must be satisfied. Such needs can be generated from rules or principles, operating limits or boundaries, balancing equations or mechanisms, or user input. The requirement for a solution can be based on the existence of a need combined with a prediction that the need has not been or will not be met. For example, the analysis can predict increasing pressure levels that exceed the tolerance of a pipeline.
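The pipeline example in this paragraph can be sketched as follows. The tolerance value, readings, and recommended action are all invented, and a naive linear extrapolation stands in for a real predictive model; the point is how a prediction combined with an operating limit yields a recommendation.

```python
TOLERANCE = 100.0  # assumed operating limit for the pipeline

def prescribe(readings):
    # Predict the next reading by extending the last observed trend.
    predicted = readings[-1] + (readings[-1] - readings[-2])
    # Prescribe: recommend an action only when the predicted state
    # would violate the operating limit.
    if predicted > TOLERANCE:
        return "reduce flow"   # hypothetical recommended action
    return "no action"

print(prescribe([80.0, 90.0]))  # predicted 100.0 -> "no action"
print(prescribe([85.0, 95.0]))  # predicted 105.0 -> "reduce flow"
```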

Some typical applications:

·   Learning recommendations

·   Adaptive learning

·   Adaptive group formation

·   Placement matching

·   Hiring

·   Pricing

·   Decision-making

Generative AI: “creating something new”

Generative analytics uses past data analyses to generate original content based on parameters or properties of the data being studied, combined with predictions or requirements for future data. For example, generative analytics can use Picasso’s library of paintings as data, and then generate new Picasso-style paintings based on photographs or drawings.
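As a toy illustration of this idea, here is a sketch that learns patterns (word bigrams) from a tiny invented corpus and generates a new sequence in the same style. Real generative AI uses deep neural networks, not this simple Markov-chain trick, but the principle of generating new content from patterns in past data is the same.

```python
import random

corpus = "the cat sat on the mat and the cat ran".split()

# "Train": record which word follows which in the past data.
model = {}
for a, b in zip(corpus, corpus[1:]):
    model.setdefault(a, []).append(b)

# "Generate": walk the learned transitions to produce new text.
random.seed(0)  # reproducible output for this illustration
word, output = "the", ["the"]
for _ in range(5):
    word = random.choice(model.get(word, corpus))
    output.append(word)
print(" ".join(output))
```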

Some examples of generative AI:

·   Chatbots

·   AI-generated content

·   Automatically generated animation

·   Coaching and tutoring

·   Artificial teachers

·   Content curation

·   Robotics

Today we have many applications of generative AI that have made headlines, such as ChatGPT and Anthropic’s Claude, which can generate new content such as articles and essays, software, audio recordings, artworks, and so on. To many people, this seems like the whole of AI, but as we can see, it is only a small part of it.

Deontic AI - “What should happen?”

Deontic analytics answers the question “what should happen?” This is a class of analytics that examines expressions of feelings, needs, desires, and other such factors to determine what kind of outcome would be best, and then works to achieve that outcome. In this sense, it is the use of analytics to inject ethical, political, or cultural order into the environment, whether that is a discussion list, resource allocation, or personnel management.

Applications include:

·   Community standards

·   Influencing behavior

·   Identifying harm

·   Amplifying good

·   Defining what is fair

·   Changing the law

·   Moderating discussion

·   Alleviating distress

Learn more

You can read more about all of these applications in the online MOOC I wrote on ethics and learning analytics.

Rules

As I said earlier, I interpret rules as “principles” that govern the ethical and responsible use of AI in education.

Current approaches

There have been many studies conducted by institutions around the world on the appropriate use of AI. They have looked at the use of AI in general and at its use in specific contexts, such as education.

Links to all of these studies can be found in the notes on this page in the PowerPoint slides.

·   Montréal Declaration for a Responsible Development of Artificial Intelligence (2018)

·   OECD Principles for responsible stewardship of trustworthy AI (2024)

·   Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems (2023)

·   UNESCO – Artificial Intelligence in Education

·   Alberta School Boards Association – Strategic Guidance on Artificial Intelligence

I also conducted a study of these and other studies a few years ago; you can find the reference in my ethics course.

Here I present a subset of the ethical principles that many of these reports found and that the authors believe should govern our use of artificial intelligence in education.

These findings are often presented as a consensus, but in my own analysis, they are interpreted differently by professions and societies. While I describe them here, it is important that you consider them from your own perspective.

Transparency

Transparency is the idea of saying when AI is used, how it is used, what recommendations or decisions it has made, and why it made them.

The related idea of “explainable AI” means understanding why an AI worked the way it did, so that we know it worked in accordance with facts, values, and expectations.

Justice and fairness


Justice is often associated with fairness, although these two concepts are of course distinct. Justice is related to respect for and enforcement of laws, as well as the idea of a “just society” or a “fair society.” As a result, in most literature describing laws or rules governing the use of AI, justice is often cited alongside concepts such as consistency, inclusion, equality, and access.

Non-maleficence

This stems from the principle of medical ethics: do no harm. In education, this is particularly relevant to the safety of students and children.

Keywords: Non-maleficence, safety, security, harm, protection, precaution, prevention, integrity (bodily or mental), non-subversion

Privacy

This principle includes the concepts of security and confidentiality. Throughout the entire AI workflow, this principle comes into play at many points.

It is important to establish data protection laws that make the collection and analysis of educational data visible, traceable, and verifiable by teachers, students, and parents. In Europe, the collection of personal data is governed by the GDPR, which includes the “right to be forgotten.”

Many people argue that the effectiveness of AI will depend on surveillance, which will impact the privacy of students and teachers.

There is also the question of how the data will be used. Will it be limited to educational purposes or will it be used for advertising?

Beneficence

The idea of beneficence is to do good to individuals and society. There are many aspects to this, but the purpose of each is a beneficial intention or goal in using AI.

In education, this includes things like improving learning outcomes, increasing access, and reducing costs.

How should these goals be accomplished?

Some of the elements considered include the idea that we should “emphasize the autonomy and social well-being of students in the process of integrating AI-based tools” (UNESCO, 2021), and also the principle of “duty of care,” the idea that you should consider the well-being of anyone to whom you provide a service.

This is based on an ethic of care, the idea that ethics rests on a relationship between the service provider and the recipient, grounded in open and trusting communication. It is the idea of listening to the person and accepting what they say about their needs as honest and factual.

Freedom and autonomy

In education, the concepts of freedom and autonomy are linked to the idea of student agency. This does not mean that students can do whatever they want, but it does recognize that in some settings it is best to allow and empower them to make their own choices and decisions.

To support agency, educators are encouraged to cultivate a learner-centered use of AI, that is, to reinforce and reiterate the authority and autonomy of humans over their own learning and the tools they use to support their learning.

This includes the principle of “human-centered AI,” that is, a definition of AI based on human needs and values, with human oversight and decision-making.

For example, UNESCO’s mandate inherently requires a human-centered approach to AI. “The aim is to refocus the debate on the role of AI in addressing current inequalities in access to knowledge, research and diversity of cultural expressions, and to ensure that AI does not exacerbate technological divides within and between countries.”

The principle of “informed consent” plays a key role here. It is the idea of transparency combined with the idea that individuals have a choice about whether or not to use AI.

Fit for purpose

This goal is one that I created to bring together a number of different but related goals from various sources. It includes, for example:

·   The pursuit of knowledge

·   The five goals of the maqasid (the protection of religion, life, intellect, lineage, and property) (Talal Agil Attas Alkhiri, p. 743)

·   Value and benefit – we must ask who benefits from the use of AI, as opposed to who pays the cost

·   Pedagogical relevance – appropriate use, grounded in educational research, evidence-based, relevant to learners’ needs, child-centered AI, developmentally appropriate

·   Reliability and accuracy – for example, people use AI to detect cheating and plagiarism, but AI cannot detect these cases accurately

One further issue lies outside our domain: the question of the use of AI in policing and warfare.

Some observations

It is often assumed, or even explicitly stated, that the values contained in these ethical codes, and in ethics in general, are common, fundamental, and universal. But that is not the case.

For example, some discussions of ethics in artificial intelligence and analytics simply assume that privacy is a right and should be respected. But when we argue against that claim, and we must argue against it rather than merely assume it, we have to ask what the basis for such an assertion is.

After all, privacy protects criminals and innocents alike.

And maybe we will feel that we should simply strike a balance between the two options, but what makes such a consequentialist, technical approach of balancing the right one? You wouldn’t balance killing against not killing. Would you?

Every society approaches these questions from a different angle, drawing its own conclusions on different grounds. Some societies are based on individual rights. Others take an approach that balances risks and benefits. Still others prioritize the social good or social order.

No one can determine what the rules are for your own society except you. How you decide, what you decide, how you implement AI in learning and in society at large, is up to you.

Limits

As I said earlier, I interpret limits as practical guidelines. Now I will look at appropriate mechanisms for teaching and learning, governance, and security.

Practical guidelines

We talk about limits, and that’s fine, but what we need in the classroom and on campus are practical guidelines for action. In this part of my talk, I will draw on the Alberta guidelines because they provide a solid foundation for structuring the conversation, although I caution you again that it is up to you to decide what they look like in your own schools and whether they are sufficient for your needs.

I also recommend the UNESCO AI Competency Frameworks for teachers and students (links are in the slide notes).

And to come back to the earlier topic of how we think: while I know it is tempting, especially for administrators, to frame an approach to AI in terms of rules and principles, this may not be how we think about complex topics like this. As we know, rules tend to help us only in the easy cases; in many cases we have to rely on our own judgment and intuition.

Learning

The first practical principle is the need to support the learning of AI. We cannot use it in a practical way if we do not understand it. This means, for example, that we:

Structure

The second principle is about creating institutional structures:

These things can be done by the national government, but they can also be done in individual schools and institutions to meet specific cases and local requirements.

Safety

Safety is a top priority.

AI literacy must include a data literacy foundation

Governance

Governance is about how decisions will be made about the use of AI in education and what those decisions should take into account.

For example:

Thank you

As I said at the beginning of this talk, it is important to note that this is just one topic in the much larger field of technology in education.

AI will be integrated into learning management systems, social media, virtual reality, security, and cryptography, including blockchain. There will be a synergistic effect as these technologies combine.

What we have talked about today is just the beginning of what I think will be a very exciting time for teaching and learning, here in Morocco and around the world.

Thank you, and again, here is a QR code that you can scan to access these slides, a transcript, the audio, and hopefully the video of this presentation.


Artificial Intelligence in Education: Context, Rules and Limits 26 Sep 2024 2:58 AM (6 months ago)

 

Ladies and gentlemen, good day; I bring you greetings from Canada. Thank you for inviting me to speak at this important forum on artificial intelligence and education. Here is a QR code you can scan to access these slides, with a transcript, just in case my French is not understandable.

Context, rules and limits

We are going to talk about artificial intelligence in education in general terms. I will begin by explaining, very briefly, what AI is and how it works, and then I will address the three main themes of the conference: context, rules, and limits.

I interpret context as “use” and will examine the many ways AI can be used in education.

I interpret rules as “principles” and will examine the fundamental principles that govern how and why we use AI in education.

And I interpret limits as practical guidelines and will examine appropriate mechanisms for teaching and learning, governance, and security.


 

What is AI?

We have two types of AI. Weak AI, also called narrow AI, can perform the specific task for which it was designed. Strong AI, on the other hand, is capable of learning, thinking, and adapting the way humans do. That said, strong AI systems do not yet exist. In this sense, the term “AI” denotes an aspiration, not a description of what has actually been created.


 

Weak AI

Next, we have two types of weak AI. An expert system is based on rules, facts, goals, and domain-specific knowledge provided by human experts. Expert systems are based on ontologies, which define which types of objects have which types of properties. The knowledge, rules, and ontologies are described using a formal symbol system, that is, a language.


 

Machine learning models, on the other hand, are entirely data-driven. Input data is processed by mathematical and statistical algorithms, resulting in what are called “models.” The data can take any form and is represented as mathematical “vectors” of data.


 

Machine learning

Deep learning is a type of machine learning. Traditional machine learning is “supervised,” meaning it requires human intervention to train the model. Deep learning, by contrast, uses artificial neural networks to mimic the learning process of the human brain and is capable of unsupervised learning.


 

Deep learning

Almost all recent advances in artificial intelligence rest on deep learning. Deep learning is a method that allows computers to learn by example, the way humans do. It uses a layered structure of algorithms called neural networks to process data, recognize patterns, and make decisions.


 

Neural networks

What do we mean when we say a neural network learns? Neurons receive inputs, process them, and produce an output. They are organized into distinct layers: an input layer that receives the data, several hidden layers that process the data, and an output layer that delivers the final decision or prediction. Within these neurons are adjustable parameters called weights and biases. As the network learns, these weights and biases are adjusted. So when we say a neural network has “learned,” it means that its weights and biases have been successfully adjusted.


 

Neurons

So what are weights and biases? A weight is the strength of the connection between one neuron and another. Not all connections are equal; some are more important than others.

A bias is the sensitivity of the receiving neuron: how likely it is to react to incoming signals. It is a way of understanding the importance of that particular neuron. The result is passed through an “activation function” that determines whether the neuron sends a signal to the next layer of connected neurons.

There are many types of neural network algorithms. They are all based on different ways of adjusting the parameters presented here.
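A single neuron of the kind described above can be sketched in a few lines of Python. The input values, weights, and bias are invented for illustration, and the sigmoid is just one common choice of activation function; "learning" would mean adjusting these numbers.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus the bias...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through a sigmoid activation function, which decides
    # how strong a signal is sent on to the next layer.
    return 1 / (1 + math.exp(-total))

# Illustrative values only.
print(neuron([1.0, 0.0], weights=[2.0, -1.0], bias=-1.0))  # ≈ 0.73
```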


 

The AI workflow

The creation and use of an artificial intelligence is the result of a complete workflow that begins with data collection and ends with the application of an AI model at scale. When we think of ChatGPT, for example, we are talking about how it collected data from places like Twitter and Reddit and other sources on the web, processed the data, and applied it at scale in the form of an application people can use on the web.

Because there are many different types and methods of designing AI algorithms, at this point it becomes more useful to talk about the different types of artificial intelligence in terms of their context of use.


 

How do we think?

Also, the different types of artificial intelligence raise an important fundamental question for educators: How do humans think? What are we actually doing when we see, plan, and decide?

Many people think that human knowledge and learning are based on rules and categories. This belief often goes hand in hand with the belief that all thought takes place in language. Language is the basis of logic and reason. But then, how do babies and animals think? Or maybe we think they don’t think at all.

Other people think that human reasoning is based on probability and statistics. Economists sometimes seem to think this way. Are there laws of thought, as there might be laws of nature?

I think that knowledge and learning are based on pattern recognition. Everything we experience, everything we think, forms a deep and complex neural network that works by recognizing patterns in data. Most of this happens automatically, unconsciously.

But if that’s true, how do we explain why we did what we did, or why we believed what we believed? Do we really have reasons? Or are the reasons we give just phrases that sound good in context?

 

We should think about these questions as we move forward.


 

Context

As I said earlier, I interpret context as “use” and will examine the many ways AI can be used in education.

In the literature, four traditional applications of AI are distinguished: descriptive, diagnostic, predictive, and prescriptive. Recently, generative AI has been added. And I predict a sixth type of application: deontic AI.

 




 

Descriptive AI – “What happened?”

Descriptive analytics includes analyses focused on description, detection, and reporting, including mechanisms to extract data from multiple sources, filter it, and combine it. The output of descriptive analytics includes visualizations such as pie charts, tables, bar charts, or line graphs. Descriptive analytics can be used to define key indicators, identify data needs, define data management practices, prepare data for analysis, and present the data to a viewer. (Vesset, 2018).

Typical applications include:

·   Monitoring

·   Systems analysis

·   Institutional compliance

·   Student profiles

·   Dashboards

It often relies on human labeling or categorization.

There is significant overlap here between descriptive analytics, data science, and data literacy.

Diagnostic AI – “Why did this happen?”

Diagnostic analytics looks deeper into the data to detect patterns and trends. Such a system could be thought of as being used to draw an inference about a piece of data based on patterns detected in the sample or training data, for example, to perform recognition, classification, or categorization tasks.

Applications include:

Anomaly detection

Anomaly detection involves tasks that target the identification of outliers or extreme data points in a large data set.

·   For example, spam detection, plagiarism detection, counterfeit detection

·   Security

·   Access control

·   Surveillance

Dependency modeling and regression

Dependency modeling targets the identification of specific associations between data points that might otherwise go unnoticed.

·   For example, sentiment analysis

·   Opinion sampling

Clustering and classification

These tasks segment data into similar clusters based on the degree of similarity between data points.

·   For example, automated grading

·   Skills assessment

Summary

Condensing data for easier reporting and consumption while avoiding the loss of more valuable and detailed information that we can use for clearer decision-making.

For example:

·   Content summarization

·   Audio and video transcription

·   Supporting special needs

·   Peer feedback analysis


 

Predictive AI – “What will happen?”

Predictive analytics answers the question: what will (likely) happen, based on identifying patterns and trends in existing data, and extrapolating that pattern or trend to likely future states. For example, such an analytics system might look at a student’s participation in a course and then predict whether the student will pass or fail.

Typical applications include:

·   Resource planning

·   Learning design

·   User testing

·   Identifying students at risk of failure

·   Pedagogical advice

·   Precision education

·   Student recruitment


 

Prescriptive AI: “How do we get there?”

Prescriptive analytics recommends solutions. Using prescriptive analytics can inform a human user or a computer system of a need that must be satisfied. Such needs can be generated from rules or principles, operating limits or boundaries, balancing equations or mechanisms, or user input. The requirement for a solution can be based on the existence of a need combined with a prediction that the need has not been or will not be met. For example, the analysis can predict increasing pressure levels that exceed the tolerance of a pipeline.

Some typical applications:

·   Learning recommendations

·   Adaptive learning

·   Adaptive group formation

·   Placement matching

·   Hiring

·   Pricing

·   Decision-making

 

 

 

 

Generative AI: “creating something new”

Generative analytics uses past data analyses to generate original content based on parameters or properties of the data being studied, combined with predictions or requirements for future data. For example, generative analytics can use Picasso’s library of paintings as data, and then generate new Picasso-style paintings based on photographs or drawings.

Some examples of generative AI:

·   Chatbots

·   AI-generated content

·   Automatically generated animation

·   Coaching and tutoring

·   Artificial teachers

·   Content curation

·   Robotics

Today we have many applications of generative AI that have made headlines, such as ChatGPT and Anthropic’s Claude, which can generate new content such as articles and essays, software, audio recordings, artworks, and so on. To many people, this seems like the whole of AI, but as we can see, it is only a small part of it.


 

Deontic AI – “What should happen?”

Deontic analytics answers the question “what should happen?” This is a class of analytics that examines expressions of feelings, needs, desires, and other such factors to determine what kind of outcome would be best, and then works to achieve that outcome. In this sense, it is the use of analytics to inject ethical, political, or cultural order into the environment, whether that is a discussion list, resource allocation, or personnel management.

Applications include:

·   Community standards

·   Influencing behavior

·   Identifying harm

·   Amplifying good

·   Defining what is fair

·   Changing the law

·   Moderating discussion

·   Alleviating distress

 


 

Learn more

You can read more about all of these applications in the online MOOC I wrote on ethics and learning analytics.


 

 

Rules

As I said earlier, I interpret rules as “principles” that govern the ethical and responsible use of AI in education.


 

Current approaches

There have been many studies conducted by institutions around the world on the appropriate use of AI. They have looked at the use of AI in general and at its use in specific contexts, such as education.

Links to all of these studies can be found in the notes on this page in the PowerPoint slides.

·   Montréal Declaration for a Responsible Development of Artificial Intelligence (2018)

·   OECD Principles for responsible stewardship of trustworthy AI (2024)

·   Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems (2023)

·   UNESCO – Artificial Intelligence in Education

·   Alberta School Boards Association – Strategic Guidance on Artificial Intelligence


 

Current approaches

I also conducted a study of these and other studies a few years ago; you can find the reference in my ethics course.

Here I present a subset of the ethical principles that many of these reports found and that the authors believe should govern our use of artificial intelligence in education.

These findings are often presented as a consensus, but in my own analysis they are interpreted differently by professions and societies. While I describe them here, it is important that you consider them from your own perspective.


 

Transparency

Transparency is the idea of saying when AI is used, how it is used, what recommendations or decisions it has made, and why it made them.

The related idea of “explainable AI” means understanding why an AI worked the way it did, so that we know it worked in accordance with facts, values, and expectations.


 

Justice and fairness

Justice is often associated with fairness, although these two concepts are of course distinct. Justice is related to respect for and enforcement of laws, as well as the idea of a “just society” or a “fair society.” As a result, in most literature describing laws or rules governing the use of AI, justice is often cited alongside concepts such as consistency, inclusion, equality, and access.


 

Non-maleficence

This stems from the principle of medical ethics: do no harm. In education, this is particularly relevant to the safety of students and children.

Keywords: non-maleficence, safety, security, harm, protection, precaution, prevention, integrity (bodily or mental), non-subversion


 

Privacy

This principle includes the concepts of security and confidentiality. Throughout the entire AI workflow, this principle comes into play at many points.

It is important to establish data protection laws that make the collection and analysis of educational data visible, traceable, and verifiable by teachers, students, and parents. In Europe, the collection of personal data is governed by the GDPR, which includes the “right to be forgotten.”

Many people argue that the effectiveness of AI will depend on surveillance, which will impact the privacy of students and teachers.

There is also the question of how the data will be used. Will it be limited to educational purposes or will it be used for advertising?


 

Beneficence

The idea of beneficence is to do good to individuals and society. There are many aspects to this, but the purpose of each is a beneficial intention or goal in using AI.

In education, this includes things like improving learning outcomes, increasing access, and reducing costs.

How should these goals be accomplished?

Some of the elements considered include the idea that we should “emphasize the autonomy and social well-being of students in the process of integrating AI-based tools” (UNESCO, 2021), and also the principle of “duty of care,” the idea that you should consider the well-being of anyone to whom you provide a service.

This is based on an ethic of care, the idea that ethics rests on a relationship between the service provider and the recipient, grounded in open and trusting communication. It is the idea of listening to the person and accepting what they say about their needs as honest and factual.


 

Liberté et autonomie

Dans le domaine de l’éducation, les concepts de liberté et d’autonomie sont liés à l’idée d’agentivité des élèves. Cela ne signifie pas que les élèves peuvent faire ce qu’ils veulent, mais cela reconnaît que dans certains paramètres, il est préférable de leur permettre et de leur permettre de faire leurs propres choix et décisions.

To support agency, teachers are encouraged to cultivate a learner-centred use of AI, that is, to reinforce and reiterate the human's authority and autonomy over their own learning and over the tools they use to support that learning.

This includes the principle of "human-centred AI," that is, a definition of AI based on human needs and values, with human oversight and decision-making.

For example, UNESCO's mandate inherently requires a human-centred approach to AI. "The objective is to refocus the debate on the role of AI in addressing current inequalities in access to knowledge, research, and the diversity of cultural expressions, and to ensure that AI does not widen the technological divides within and between countries."

The principle of "informed consent" plays an essential role here. It is the idea of transparency combined with the idea that individuals have a choice about whether or not to use AI.


 

Fit for Purpose

This objective is one I created to bring together a number of different but related objectives from various sources. It includes, for example:

·   The pursuit of knowledge

·   The five objectives of the maqasid (the protection of religion, life, intellect, lineage, and property) (Talal Agil Attas Alkhiri, p. 743)

·   Value and benefit: we must ask who benefits from the use of AI, as opposed to who pays the cost

·   Pedagogical relevance: appropriate use, grounded in educational research, evidence-based, relevant to learners' needs, child-centred AI, developmentally appropriate

·   Reliability and accuracy: for example, people use AI to detect cheating and plagiarism, but AI cannot detect these cases accurately

One more falls outside our domain, but there is also the question of the use of AI in policing and in warfare.


 

Some Observations

It is often presumed, or even explicitly asserted, that the values contained in these ethical codes, and in ethics generally, are common, fundamental, and universal. But this is not the case.

For example, some discussions of ethics in artificial intelligence and analytics simply assume that privacy is a right and must be respected. But when we push back against this, and we must push back against it, we then have to ask what grounds such an assertion.

After all, privacy protects criminals as well as the innocent.

And perhaps we will feel that we should simply strike a balance between the two options, but what makes such a consequentialist approach, a technical approach of balancing, the right one? You wouldn't strike a balance between killing and not killing. Would you?

Each society approaches these questions from a different angle, drawing its own conclusions on different grounds. Some societies are founded on individual rights. Others take an approach that balances risks and benefits. Still others prioritize the social good or the social order.

Nobody can determine what the rules of your own society are except you. How you decide, what you decide, how you implement AI in learning and in society at large: that is up to you.


 

Boundaries

As I said earlier, I interpret boundaries as practical guidelines. Now I will look at appropriate mechanisms for teaching and learning, governance, and security.


 

Practical Guidelines

We talk about boundaries, and that's fine, but what we need in the classroom and on campus are practical guidelines for action. In this part of my talk I will draw on the Alberta guidelines, because they provide a solid basis for structuring the conversation, though again I caution that it is up to you to decide what these look like in your own schools and whether they are sufficient for your needs.

I also recommend UNESCO's AI competency frameworks for teachers and for students (the links are in the slide notes).

And to return to the earlier topic of how we think: though I know it is tempting, especially for administrators, to define an approach to AI in terms of rules and principles, this may not be how we actually reason about complex subjects like this one. As we know, rules tend to help us only in the easy cases, and there are many cases where we must rely on our own judgement and intuition.

Learning

The first practical principle is the need to support learning about AI. We cannot use it practically if we do not understand it. This implies, for example, that we:

-     Build AI literacy capacity for teachers and students: what AI is, how to use it, where it is appropriate to use it, and where it should not be used

-     Discuss AI: create a community to talk about the places where we see AI around us and to reflect on what we think of it

-     Rethink tests and assignments. Can we really prevent students from using AI? How can we structure education so that they benefit from using AI?

-     Access. Not all students will have access to AI. This can create inequalities in the classroom.

 

Structure

The second principle concerns the creation of institutional structures:

-     Support matters, so there needs to be a structure or organization responsible for the responsible and ethical use of AI

-     When AI is used, informed people should be involved; when AI policy is discussed, it should include people who know AI

-     There should be a list or inventory of AI tools, and these should be vetted, so that there is a list of "safe" AI tools

-     Not all tools are appropriate for children

These things can be done by the national government, but they can also be done in individual schools and institutions to address specific cases and local requirements.


 

Security

Security is a top priority.

-     Make sure cybersecurity protocols take AI into account

-     Take particular care that these protocols extend to data security

AI literacy must include a foundation of data literacy.


 

Governance

Governance concerns the question of how decisions will be made about the use of AI in education, and what those decisions should take into account.

For example:

-     Update or establish policies describing prohibited uses of AI

-     Determine the legal status of data used by AI and of AI-created content (these questions are still being decided in North America and Europe)

Thank You

As I said at the beginning of this talk, it is important to note that this is just one topic among many in the much larger field of technology in education.

AI will be integrated into learning management systems, social networks, virtual reality, security, and cryptography, including blockchain. There will be a synergistic effect as these technologies combine.

What we have talked about today is only the beginning of what I think will be a very exciting period for teaching and learning, here in Morocco and around the world.

Thank you, and once again, here is a QR code you can scan to access these slides, a transcript, the audio, and, hopefully, the video of this presentation.

 

 


Bank Street 8 Sep 2024 3:40 PM (6 months ago)

Answering the question on Reddit r/Ottawa, "What is your favorite part of Ottawa?"


Bank Street

For me, it has been everything up and down Bank Street (when I was young I lived in Metcalfe and for me Ottawa *was* Bank Street). 

- Vernon, where my Grandmother lived, 

- Spring Hill, where my father and brother rest now, 

- Ernie's on what was then highway 31, where I worked as a breakfast cook, 

- the Old Ottawa sign, 

- the Quarry where we swam on what is now John Quinn Road at Stone School Road, 

- Leo's Livestock, where the Metcalfe Walk would begin, 

- Greely, 

- Rideau-Carleton Raceway (where I once worked; I'd walk to Bank Street and hitchhike home), 

- the old K-Mart in Blossom Park (now a FarmBoy - big improvements), 

- the Conroy Pit (and the rest of the Green Belt), 

- shwarma corner at Hunt Club Road, 

- the old McDonald's at Alta Vista Plaza (across from where I bought my first car), 

- Canadian Tire at Heron, 

- Wild Birds Unlimited, 

- and the big Bell Building across the street (where I worked for Nortel for a summer), 

- Billings Bridge (my first bank account) 

- and across the road, the Bank Street Professional Centre (where a broker swung a mortgage for me), 

- the old Harvey's (I'll miss it), 

- Rideau River, 

- Chef's Paradise (if you saw my girth you'd understand), 

- Patty's Pub, 

- no more telephone poles on the road (I hit one on a bicycle just where Giant Cycle is now, wrecked my bike and went to the hospital for a few hours), 

- Ottawa Public Library (used to be a pool place where I'd hang out and smoke cigars), 

- the Canal (so many memories), 

- Lansdowne Park (where I worked 1975-1979 and am back now as a RedBlacks season ticket holder), 

- Glebe Centre (where I worked and where I hope I'm at in my late 80s), 

- Kettleman's Bagel (which was a Subway when Subway was good), 

- Bridgehead (good coffee, fair trade), 

- Joe Mamma (where I bought my bicycle that took me to Rimouski), 

- Patterson Creek, 

- the old Greyhound Station, 

- Museum of Nature (on O'Connor), 

- the Rialto Theatre (long gone, but where I'd watch movies all day for 50 cents), 

- James Street (where I used to live, and much later, James Street Pub), 

- Barrymore's (when it was great), 

- Cue & Cushion (where I would meet my dealer), 

- the Somerset Hotel (where my father stayed, an old-style pub, and what is now Gay Square), 

- Henry's (where I've spent a small fortune on camera gear), 

- Cooper Street (where I lived, across the street from the church that is now a theatre, on O'Connor), 

- L'Esplanade Laurier (beautiful white buildings, the Sugar Bowl, the egg restaurant), 

- the Jack Layton Building, 

- the Jackson Building (where I worked when I was in College), 

- the classy pool hall on the second floor at Bank and Sparks (where the Howard Jewellers is), 

- Sparks Street itself, 

- the Confederation Building (where I met my childhood M.P., Walter Baker, and he took us to the Parliamentary cafeteria), 

- Parliament Hill to the right, 

- and the Ottawa River. 

And then there's the day I walked every inch of it, from Molly McGuire's on Rideau Street, where I stayed after the last bus, all the way to where I lived as a 16 year-old in Metcalfe, walking past Greely on a cold winter day, singing Piano Man at the top of my lungs and swinging my arms to stay warm, arriving home with the dawn.

Bank Street has always taken second place, never as upscale as Rideau Street, never as low class as the Market, but it was mine, I loved every inch of it, and I still do (even though it really needs a tram and less traffic).

 



Free Speech Maximalists 5 Sep 2024 8:32 AM (7 months ago)


Leo Laporte said once again today "I'm a free speech maximalist" on the TWIG podcast, and once again I had the same visceral reaction to the term. We know that 'free speech maximalists' are nothing of the sort, though they frequently roll out this phrase to oppose any sort of limitation of online (or other) speech.

This goes beyond the political debate. Laporte commented on Elon Musk's claim to be a free speech maximalist by saying "just use the phrase cis-gender on Twitter", suggesting (accurately) that Musk's 'maximalism' only flows one way. But it wouldn't matter even if Musk were more tolerant of progressive discourse.

That's what I'd like to make clear with this post: there is a huge swath of speech that even the 'free speech maximalists' restrict.

- least controversially, child sexual abuse material (CSAM) and related offensive materials, including snuff films and their ilk. This stuff should never see the light of day, and I have yet to find any 'free speech maximalist' defend it

- treasonous speech and speech that harms the national interest in a significant way, including leaking military secrets and the rest. There are exceptions - leaks in the public interest, such as the Pentagon Papers, are supported. But give away the secret to the atom bomb, and...

- libel and slander. Speech that says something false about a person, and that as a result harms that person, is subject to sanctions and penalties.

- trade secrets. A person working for one company cannot go to another company and disclose the first company's trade secrets. Trade secrets are considered property, and discussing trade secrets is considered a theft of that property.

- copyrighted materials, and other materials related to trademarks. A person cannot say they represent Coca-Cola if they don't, and a person cannot publish Harry Potter under their own name.

- speech intended to sway or otherwise exploit the stock market. As Musk himself has learned, there's a range of things you can't say to promote your company. As Martha Stewart learned, conversation leading to insider trading is among them.

- fraud, misrepresentation, and false advertising. You cannot impersonate other people, or impersonate police officials, or make statements about objects for sale that are false (though there's a lot of latitude here, which is why 'nut free' products can contain nuts)

- direct threats or statements of intent to cause harm

- incitement to riot, or other speech known by the speaker to directly cause harm to other people

As I said, this is a wide swath of speech that is prohibited. 

Now it might be argued that the speech described above has in common the fact that it causes harm to some specific person or class of people. And I agree - the unifying purpose of these prohibitions is to prevent harm - either harm caused in the production of the speech or material, or harm caused as a result of the dissemination of the material.

As it happens, I also agree that we should limit speech in order to prevent harm. Though I find it interesting that the way we measure harm here is either direct personal injury or abuse, or loss of property. Nothing else matters.

I'm not sure whether these are the right boundaries around freedom of speech. I'm not saying that any of these types of speech ought to be allowed. But I think it shows that we ought to reframe the argument between free speech maximalists and the rest of society. It's not about maximum freedom of speech. It's about how we draw these boundaries.

I'll reframe it in two particular ways: by reconsidering harm, and reconsidering loss.

Harm: there are many ways to harm a person other than physically harming them, and it is in the definition of other harms where people disagree about that sort of speech. 

The most obvious of these is indirect harm, for example, setting into motion a chain of events where a particular outcome couldn't be predicted, but where some outcome against some person could be predicted. For example, if you toss a stone down a mountain, it's very unlikely you could hit a specific person, but also very likely that you would hit someone. And for that you should be liable.

That's how speech against an identifiable group works. It doesn't cause harm to any specific person, but the cumulative effect of such speech is going to harm someone at some time in the future. If you keep accusing blue people of being baby-killers, eventually someone is going to take out a gun and kill a blue person. 

There are other harms that are not physical harms. Creating a climate of fear, gaslighting, casting doubt on reality, and related 'speech' behaviour all have psychological consequences. They create an environment where a person is unable to enjoy their life (aka 'quiet enjoyment'). A lot of abuse of women is like that.

Loss: we can lose more than just property. As suggested above, we can lose peace of mind, security, and enjoyment of life. Losing things of value can include losing one's history, or culture, or religion. But loss can be more wide-ranging.

For example, because we live in a society, and value social interaction, loss of social standing or social interaction, perhaps through singling out or ostracism, is clearly a harm. Conversely, receiving too much attention from society is also a harm, one created (for example) by doxxing or piling on. 

These are all just examples, and we would need to talk about the severity of the harm or loss, and how it can be attributable to any sort of speech. But it should be clear that these other kinds of loss and harm exist, and I think that society would be well justified in addressing them.

And that's where the debate on freedom of speech takes place: in what can be counted as a harm or loss, and in how speech has an impact on that.

Hence we'll see free speech maximalists complain because people are saying 'happy holidays' instead of 'merry Christmas'. They know that nobody is preventing them from saying 'merry Christmas', but unless everyone does it, they fear the loss of (their) religion in society.

And when we see 'free speech maximalists' argue that people of certain persuasions should not wear hats, or scarves, or kirpans, it's not because they think these are dangerous, it's because they fear their own society is being lost in a sea of multiculturalism. 

And when they oppose the idea of people being gay or trans or of indeterminate sexuality, they oppose this form of expression because the ideas they represent create offense, or even just a feeling of repugnance, a type of harm.

'Free speech maximalists' are people who, from my perspective, priorize their own definitions of harm or of loss, which (fortunately) overlap with some of ours, but who dismiss as irrelevant or trivial any sort of harm or loss they don't personally feel.

And that's why I'm not a free speech maximalist.

I am not personally impacted by misogyny, or racism, or by atheism, but I know people who are. And it matters to me that they are as able as I am to enjoy the benefits of living in a society. So I think we as a society should move to limit the harms - all the harms - of speech and expression.

Not all of that needs to be enshrined into law. Some of it is just common politeness, where the harm isn't severe, but still knowable. But the rest of it - the things that turn a social network - or a society - into an uninhabitable and hate-filled cesspool, these we can take action against. And I have no problem with that.

Image: New Statesman


 





Why Free Learning Needs Free Artificial Intelligence 4 Sep 2024 11:58 AM (7 months ago)

 


'Free' learning means two things to me: first, the idea that learning ought to be zero cost, and second, the idea that learning ought to be open and inclusive. Together these paint a picture of access to learning as part of the basic infrastructure of a society. 

We're not good, yet, at providing basic infrastructure. Most societies have a free and open road network along with access to basic security, including fire and police service. In Canada we also have free and open public education service for children up to Grade 12, as well as access to a range of essential health care services. Still, basic necessities like water, sewage, food, housing, electricity and information services are all things we pay for, with the result that too many go without. This is even more true on a global scale.

My focus is on education because it's where I ended up and what I know well. My understanding of 'education' and 'learning' in a society is far broader than schools, colleges and universities. Yes, these are essential institutions, but learning is a much broader concept, including workplace learning and personal learning. It is also a concept that ranges from access to information services, including news and data, through to access to assessment and entry into the marketplace. 

It takes a lot to provide these services. Buildings and infrastructure must be constructed and maintained, people need to create and provide access to learning resources and opportunities, people need time and space in their own lives to engage in learning activities, and the whole system must be coordinated and paid for. We spend 6.6 percent of our national GDP on education, or maybe 130 billion dollars, give or take, plus whatever we spend out of our own pockets for data and information services such as books, internet, and software.

It's a lot of money, and I don't consider the need for it to be self-evident. Learning isn't an end in itself for most people; it's a means to other ends, and it's these other ends that I am really working to support. I've tried to capture that in my own vision statement, expressing the idea that each of us ought to be able to live, grow and develop in our own way, to the maximum of our capacities, if we wish. It's an extension of John Stuart Mill's maxim that "the only freedom which deserves the name, is that of pursuing our own good in our own way", along with the Kantian idea that we should "treat humanity... at all times also as an end, and not only as a means."

Learning is a necessary, though not sufficient, condition for the realization of this vision. That said, a society in which learning is widely accessible and widely attained is one that is far more likely to achieve this vision, as people realize for themselves first, their own inherent good, and second, the inherent good of others. By 'inherent good' I mean here goodness in itself, and not measured against some external standard such as 'value' or 'worth' or other economically laden terms. 

Learning is not only resource-intensive, it is also difficult for the learner. We cannot, and probably never will, simply absorb knowledge through some sort of direct transfer. Learning is, as some people like to say, embodied, which means that there is no distinction between the informational content of our brain and body and their physical structure. I have described this embodiment by saying it is connectivist, that is, that knowledge is the connective neural structure in the brain, and that learning is the development of that structure. This means that knowledge is grown, not acquired, through a process of practice, experience and reflection.

In this way the development of a learning infrastructure is analogous to the development of an infrastructure for agriculture and human growth. We can't just focus on the provider 'giving' food to a person; an entire system needs to be contemplated, where transfer of food from producer to consumer is only one small step in a social system that grows the food and a personal system that uses the food to grow, a system that (interestingly) represents about 7 percent of Canada's GDP.

It is interesting that in the statistical accounts learning is represented as a cost while agriculture is represented as part of our national income, as though they were somehow distinct in nature. But neither is uniquely a form of income or expense; they may be either for any individual, and in a society, they are both, and globally, their status as income or expense balances out: the cost of learning is exactly what people earn from providing learning, while the cost of food is exactly what people earn providing food. 

So when I talk about 'free learning' there are two imperatives that must be considered: first, the real cost of providing access to learning for everybody, and second, the real income from learning that a large number of people depend on for their own livelihood. Supporting 'free learning' isn't simply a matter of reducing the cost of learning to zero, nor even of lowering the overall cost as a percentage of social cost and/or income.

It is a question of re-balancing; of providing access to all no matter what their income (or other demographic status) instead of providing more access to some and none at all to the rest. And it is, similarly, a question of providing access to (potential) earnings or revenue from learning to all, instead of providing a large revenue to some, and none at all to the rest. Both of these need to be in some sort of balance.

To put the matter more concretely: suppose we are concerned about access to education in Eswatini. Though the country spends 5 percent of its GDP on learning, its GDP is so low the result is that education is chronically underfunded, and many do without. Simply 'giving knowledge for free' is insufficient here. It's important that Eswatini develop its own learning economy, so that its capacity to generate income increases as well as its capacity to provide learning opportunities. To do otherwise is to further exaggerate the disparity between rich nations and poor nations.

This discussion is often conducted using the lexicon of 'colonialism'. In my previous post on this subject I defined colonialism loosely as follows: "Colonialism is (at least in part) the appropriation by one society of some other society or culture's productivity or wealth for its own gain. And it is (at least in part) the imposing of laws, values and cultural elements by one society onto another." 

This definition makes clear the idea that the imperative of 'free learning' isn't simply one of income and expenses. There are other more ineffable currencies that form a part of any national economy: its language(s), its system of laws, its culture and traditions, its social knowledge, its values and beliefs, to name a few. The practice of 'giving knowledge for free' is as harmful to these parts of a national economy as it is to the financial side of the economy.

When a nation can no longer develop its own learning economy, its language and culture (etc) are depressed. The learning provider extracts these from the economy without providing any return, and uses it for its own purposes, while at the same time earning an income developing learning resources that are then distributed (for 'free') to the recipient nation. The money - in the form of international aid, donations from grants and foundations, and even volunteer labour developing (say) software and content - is spent entirely in the 'donor' nation.

The development and use of artificial intelligence has made this process clear for all to see, which is why its use has resulted in a range of very reasonable objections, which again I discussed in my previous post. Its use results in a very apparent form of colonialism, in which a society's language(s),  system of laws, culture and traditions, social knowledge, values and beliefs, etc., are extracted, repackaged by the 'producer', and then sold back to us, in a form inevitably altered by the values and beliefs of those who produced it.

The only real difference between what we'll call 'AI colonialism' and 'Good Old Fashioned Colonialism' is in who is being colonized and who is doing the colonizing. In the case of GOFC, it was one nation colonizing another. In the case of AIC, it is one sector of the economy colonizing the rest. Though if we pause and consider for a bit we'll find it's not so different after all: in most societies, developed and otherwise, there is a structural colonialism, where one wealthier sector of society extracts value from the other, and then sells (or in the case of charity, 'gives') it back as a value-laden alternative.

I am sympathetic with those who are opposing AI on these grounds, though my charity is extended only grudgingly to those who have only recently made the switch from colonizer to colonized. And my real loyalties are with those who have always been colonized - not only those in Eswatini (who, to their credit, have resisted colonization better than many) but also those in my own society and those like mine, who contribute with their language(s), system of laws, culture and traditions, social knowledge, values and beliefs, etc., and find an educational system - and knowledge economy generally - sold back to them, inevitably changed by the values and beliefs of those who performed the appropriation.

This is an unsustainable model. Over time, it not only reduces the wealth of the subjected population, it also reduces the capacity of the provider (or 'donor') community to generate wealth without these inputs (one imagines that a company like Disney would flounder without the privilege to incorporate and repurpose Arab or Indigenous culture and folklore). 

This is why, when I wrote for the OECD on sustainable open educational resources (OER), I wrote in favour of a community-based model. "The distinction between producers and consumers need to be collapsed. The use of a learning resource, through adaptation and repurposing, becomes the production of another resource." It's a model of a community producing learning resources for itself, rather than importing them at great cost in money and self-identity from a 'provider'.

As I said off the top of this article, we're not very good at providing this infrastructure. What I meant by that is that even before the arrival of artificial intelligence - long before, in fact - knowledge and learning resources were produced by extracting value from a community, repurposing it, and selling it back to that same community. What was removed was not only the community's own wealth, but even the very possibility of that community providing learning resources for itself.

It's a model created in two steps: first, defining a knowledge and information 'commons', and then applying some logic from Locke to convert it into property: "It being by him removed from the common state nature hath placed it in, it hath by this labour something annexed to it, that excludes the common right of other men." In other words: he who picks the apple gets to sell the apple. And thus the fruits of our society and culture, one by one, with inexhaustible patience, have been appropriated by private industry; and having amassed the lion's share of the world's wealth, they circle around, seeking to appropriate the last.

I am in favour, as are many others, of taxing this industry, to achieve a redistribution of wealth, restoring some balance to our society, which cannot continue in its current fashion. But a mere transfer of wealth will not address the underlying cause. The model of extraction and resale is itself what perpetuates the inability of a people to fend for itself - to feed itself, or to educate itself.

Working toward this end has been a long and arduous road, through pre-Open Source licenses in software like George Reese's Nightmare, through a 'freenet' in western Manitoba (called 'Westman Community Network' because someone trademarked the widely used term 'freenet'), through open educational resources (OER) and massive open online courses (MOOC), through local news cooperatives, to where we are today.

The purpose and intent was never to create a 'commons' that could be aggregated and exploited by commercial interests - that was the source of my longstanding debate with David Wiley. It was to create and enable mechanisms that helped a community build and support its own learning resource network that would not need to depend on proprietary journals, textbooks, or educational resources - and for that matter, to provide an alternative to proprietary and costly educational institutions. I always felt - and still feel - that with his emphasis on 'quality' and commercialization, Wiley's approach to OER was directed toward making the rich richer, even while it was cloaked in a facade of 'giving' learning resources to those less fortunate.

Indeed - and I've made this observation before - much of the existing structure of educational institutions is dedicated to preserving the fortunes of those already fortunate. It is indeed ironic that those who work for such institutions, and those who supported the academic publishing industry as editors and authors, are now finding their own work aggregated and exploited. What we are seeing with the rise of artificial intelligence isn't a new pattern. It is just the continuation of an existing pattern. 

And this is why it does no good to argue that 'AI must be regulated' or that 'AI must be blocked'. Whether or not AI succeeds as a technology is moot; neither blocking AI nor regulating the industry will alter the model of aggregation and exploitation that it exemplifies. The knowledge, learning and information industries will continue to exist, and with or without AI will continue to harvest a community's languages, systems of laws, culture and traditions, social knowledge, values and beliefs, and so on, and in some fashion reshape them according to their own values and sell them back to the community.

Perhaps the elite status of the academic community would be saved, at least for now. But this does nothing to help the rest of us living in different communities around the world.

And this brings us back to what, to my mind, is the real purpose of open educational resources. They represent a means, mostly (though not exclusively) through digital technology, for a community to communicate with itself, to gather and share knowledge, to pass along its values and mores, its ideas and beliefs, and to be able to do this without reliance on external knowledge, information and learning providers.

We saw this in social media, before it was privatized and converted into a toxic mess. We saw that a community could create its own form of public journalism, reporting on and sometimes forming mass movements, breaking news, and (closer to my own sphere) educating and informing itself. Open Educational Resources - and then, later, Open Practices - became a form of learning empowerment.

It - to me, at least - was never about giving people an education (or giving them rights, or freedoms or anything else). It was about people being able to create these things for themselves.

Now a lot of my theoretical work over these years has been focused on the question of how this happens. My answer - and you don't have to agree with me on this to see the point - is that knowledge is produced through connection and interaction. A community creates its own wealth - not only financial but also social, cultural and intellectual - through the process of aggregating, remixing, repurposing and feeding forward. It's a system that works so long as every part of society implicated in the process benefits from the process.

This is, of course, a network theory of wealth - but, importantly, not a commons-based theory of wealth. It is based, not on collaboration toward a single goal or entity or nation state, but on cooperation, each with the other. It's an ethos based not on extraction, but on sharing. And it is also a network theory of knowledge, culture, language - whatever you want, whatever is produced by a society, whatever is produced by a network of autonomous and self-governing individuals working together.

To my mind, AI is just the latest version of such an entity. AI is based on the same principles that enable a society to create all these goods, and for an individual human mind to create them as well. This production is in all cases the product of learning, and learning is in all cases the product of practice, experience and reflection. 

Indeed, it is not a question of whether AI will 'continue to exist' or not. At its core, AI is nothing more than a set of mathematical principles that, when embedded in a physical system, can produce actionable knowledge. These mathematical principles aren't even all that complicated - it's not complexity that creates AI, it's scale (and, to a lesser degree, efficiency, so we can reduce the scale). These principles will continue to exist independently of a few companies, business models and economies. They will exist in individual humans - who will continue to be the means of production - and they will exist in societies. And probably, they will continue to exist in computers.

So now we turn - at last - to the subject of free artificial intelligence. For it is my belief that we are facing a question not of whether there will be artificial intelligence, but rather, who will own it. Will it become an instrument of colonization, or of liberation?

Over the last few months the Open Source Initiative (OSI), which created the 'open definition', has taken it upon itself to define Open Source AI (and the existence of this definition is one reason why I opted to use the title 'free AI' and not 'open AI'). It resembles in most respects a definition of free software, but with one major exception: limiting itself to free and open "information about the data used to train the system", but not the data itself.

The very content that is harvested and aggregated from society at large, whether via a common crawl or some other mechanism, will be the part of the 'Open AI' system that is proprietary. This preserves a model in which only one part of the social ecosystem owns the product of that ecosystem. The danger here isn't that some company or another will be able to produce the 'best' AI using proprietary data. The danger is that, by ensuring that the data are proprietary, they will be able to prevent anyone else from creating AI.

After all, the mechanism described by OSI, that "a skilled person can recreate a substantially equivalent system using the same or similar data," is a theoretical mechanism that depends on having 'the same or similar data', which cannot be obtained in practice if the AI company owns it all. And the more opponents of 'open AI' use copyright restrictions to argue against the aggregation of content by AI, the more they entrench the lockdown of AI enabled by the OSI definition.

The worst case scenario - and yet the one made most likely by most objections to AI extant today - is that only a small number of companies (which we can all name) are able to create and use useful AI models, and that our economy becomes dependent on paying (ever increasing) royalties and other fees for the use of what was, originally, our own knowledge. And if that happens, learning will never be free, not by any meaning or definition of the term.

The only thing that protects open knowledge, open source and open learning is open data.

Sure - we could just say we don't get to have open learning and the rest because the data used to train AI is de facto not open - it is owned, and the copyright held by the many people who have authored papers, articles, and Twitter posts makes it so. It would be unfortunate, because we could never then have a free society of the sort I envisioned at the top of this paper, but such a society just is incompatible with private property.

But I don't need to argue for the extermination of private property, and indeed, would find it difficult to imagine people enjoying life to the greatest extent possible without it. And the art or artifact genuinely produced through the fruits of one's own labour is, quite rightfully, one's own.

But this does not entail stealing from the commons. There are two kinds of art and artifice in the world: that produced by individuals, and that produced by the community as a whole. And true, there is a whole lot of overlap between them. But I think we can make this distinction quite easily using language as an example.

There is, as Wittgenstein famously said, no such thing as a private language. This is not because it is impossible for a single individual to form their own words or grammar, but because the concept of a language precludes it being anything other than a tool for communication between at least two people. Language is produced by the community as a whole; nobody owns it, nobody owns the grammar, and nobody (with some exceptions) owns the words.

In broader society, language is much more than words and sentences; it is the full range of shapes, colours, sounds, memes, fads and fashions, gestures, whatever, that we use to communicate with each other, and within that communication, to express a body of knowledge about ourselves and the world.

This (in part) is what an artificial intelligence is harvesting when it aggregates data. It is not aggregating your 'content', it is aggregating the part you borrowed from the rest of society in order to get your idea across. The actual 'content' is very likely to be left behind, unless it stands out as a meme or widely used expression. Your claim to own that part of your content makes it legitimate for an AI company to say it owns that part of the content (as expressed in the (now) proprietary data it has collected). And this legitimizes the transfer of that part from the community to the company.

And we can't recreate it because it's the only community we have, and the knowledge that we could once share with each other in order to actually form a society is lost forever.

If we go back to it: the whole point of open educational resources should never have been to declare ownership, it should have been to enable sharing, through the recognition that this content, at least, is not for sale. This content, at least, belongs to the community, and its sole purpose and intent is to be used by the community, any community, to advance itself, to grow itself, and to express itself.



What Is the Soul of Open Educational Resources? 27 Aug 2024 9:19 AM (7 months ago)


I have already written a short post in OLDaily responding to Heather Ross's post but I would like to expand a bit on it with a short post here.

The gist of her post is that the values of open education run directly contrary to the use of artificial intelligence in education.  "GenAI may be fun to play with and make some tasks easier," she writes, "but the cost to the values of open, the planet, marginalized groups, and humanity as a whole are far too great." I don't agree.

Ross begins by making several points about the use and value of open educational resources at her own institution (quoted):

All these are great benefits. To me they are a vindication for the work that I and other members of the OER community have undertaken over the last few decades.

Ross also makes the point that under-represented students in particular can benefit from OER (again, quoted): 

I have no disagreement with any of this. As a longtime supporter of diversity, equity and inclusion for all people on campus and off, I am particularly enthused to see OER helping these communities in their learning, social and cultural goals.

Where we begin to part ways is with the view expressed by David Wiley in some promotional text: "using generative AI is a demonstrably more powerful and effective way to increase access to educational opportunity."

I've had my disagreements with Wiley over the years but we are in agreement on this point. Now what it means to say "increase access to educational opportunity" may be another point of contention; creating startups and making money isn't my idea of progress. But we agree on the potential of AI. 

Here Heather Ross disagrees. "Everything I’ve learned about open, everything I’ve ever believed about what the OER movement stands for is the antithesis of what GenAI is and does."

Let's consider the argument.

First, she writes that "Open is about improving access to education and the lives of learners worldwide, not just for those in privileged countries or communities. GenAI is used to create papers and images for the privileged, harming many of the very people we’ve said open is trying to benefit."

In response, I would argue that, all else being equal, anything that makes the creation of learning resources faster, cheaper and more effective is going to help people access education worldwide. I am very cognizant of the fact that billions of people have no internet access, let alone AI. But AI virtually eliminates a key bottleneck: the cost of content production. A reasonably educated person can create a logic text in a day, for example. How can that not improve access?

Yes, AI can be used by the privileged to create papers and images. So what? Rich people can also use roads, but that's no reason not to build roads. It makes no sense to me to say that the only use of a technology must be to provide greater access to less privileged people worldwide. 

Of course, I did say 'all else being equal', and if AI actually harms the people OER is trying to serve, then there is a good argument here. But this harm, to me, would need to be demonstrated. We can't just say 'AI is harmful' and leave it at that. So we'll put a marker here and come back to the question of harm as it arises.

Second, echoing no small number of people, she writes that "While open aligns with several of the SDGs, GenAI is an environmental nightmare, from the energy needed to run the growing number of servers to the immense amount of freshwater needed to cool them (by the way, the same is true for cryptocurrency)."

Nobody doubts that creating AI models takes a lot of power, though if I were more cynical I would argue that it is not alone in this regard. We need to keep AI in perspective. The global electricity consumption in 2022 was 24,398 terawatt-hours and rising. AI, meanwhile, is projected to consume 80-100 terawatt hours by 2027. That's more of a rounding error than anything else. No, the real problem is that we're still using hydrocarbons to generate this power.
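The arithmetic here is easy to check. This is just a back-of-the-envelope calculation using the two figures quoted above (2022 global consumption and the 2027 AI projection):

```python
# Back-of-the-envelope check of the figures quoted above.
global_twh_2022 = 24_398                     # global electricity consumption, 2022 (TWh)
ai_twh_low, ai_twh_high = 80, 100            # projected AI consumption by 2027 (TWh)

low_share = ai_twh_low / global_twh_2022 * 100
high_share = ai_twh_high / global_twh_2022 * 100

# Even at the high end, AI's projected share is well under half a percent.
print(f"AI share of 2022 global electricity: {low_share:.2f}%-{high_share:.2f}%")
```

On those numbers, AI's projected consumption works out to roughly a third to four-tenths of one percent of 2022 global electricity use.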

If AI helps us move to wind, solar and other renewable energy sources, that alone will have been worth the investment. But AI also helps achieve other efficiencies. I ran an item in OLDaily the other day where "The average time to upgrade an application to Java 17 plummeted from what's typically 50 developer-days to just a few hours. We estimate this has saved us the equivalent of 4,500 developer-years of work (yes, that number is crazy but, real)." 

The environmental argument is a straw man. Nobody is seriously denying the seriousness of climate change or the need to stop dumping pollutants into the air. But we do more damage to the environment with our morning cups of coffee.

Third, Ross argues that "While open is being used to integrate EDI and Indigenization into curriculum, GenAI, programmed by those of dominant groups, often fails to represent or misrepresents members of marginalized communities."

Well, this is true. And it's true because the source data used to generate AI, consisting of books and text and images and other content found on the internet and elsewhere, also fails to represent or misrepresents members of marginalized communities. What I would say about the people who are creating AI is that at least they're trying, which is far more than can be said of most content industries around the world.

As Ross knows well, the misrepresentations described by Bali are perpetuated in global commercial media. They are also perpetuated by governments, social media, advertisers, and pretty much any other form of human communication. 

What would improve the representation of marginalized communities in AI is an increase in the contributions of these communities in the data being used to train AI. This is true of representation in general: if we want to hear from marginalized groups, they have to speak. Yes, we need to be listening for them, and yes, we need to amplify these voices. 

AI can help them with this. If it takes a fraction of the resources it used to take to create a useful and usable OER, even if it has to be corrected for misrepresentation, then there is far more opportunity for people in under-represented groups to create resources where they see themselves reflected in the materials being used in learning. AI-assisted transcription and translation, resource recommendation, community formation and more can also help members of marginalized groups.

It just boggles the mind to think that all the positive impacts of AI would be tossed aside because large language models trained on Twitter produce sub-optimal text.

Fourth, Ross also raised the copyright argument, writing "While open has always called for recognition of the work’s creators and contributors and gratitude for their willingness to share it openly, any such gratitude toward GenAI-created work that was taught on copyrighted works against the copyright holder’s permission will ring hollow."

This is factually untrue. While it is true that many people have called for the use of the attribution clause in CC licenses, the fact remains that it is not required, and that many have argued in favour of the CC-0 license instead. And while some people have felt that gratitude for open content ought to be expressed, many people (including myself) have argued against the idea that open content is a 'gift' that requires gratitude.

But it doesn't matter anyway. Although the cases are still before the courts, many AI proponents (including myself) argue that learning from content is not the same as copying content. For the most part (and there are exceptions) AI does not copy and reproduce content, it extracts statistically relevant regularities or patterns in the content, such as word order.

Moreover, for any given piece of content produced by AI, there may be hundreds or even tens of thousands of pieces of content implicated in its production. When an AI is trained to use word ordering like 'should be' and not 'be should', for example, it is learning from content. But it wouldn't make sense to 'credit' the authors of the content for this 'discovery'.
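As a toy illustration of what 'extracting regularities' means - a deliberately simplified sketch, nothing like a real large language model - here is a bigram counter that learns word-order patterns like 'should be' from many sentences at once, without storing any document whole:

```python
from collections import Counter

def bigram_counts(corpus):
    """Count adjacent word pairs across a corpus of sentences."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        counts.update(zip(words, words[1:]))  # each (word, next_word) pair
    return counts

# A tiny, made-up corpus: the pattern 'should be' recurs across authors.
corpus = [
    "rules should be followed",
    "exceptions should be rare",
    "content should be shared",
]
counts = bigram_counts(corpus)
print(counts[("should", "be")])  # the regularity, not any one sentence
```

The statistic 'should be occurs often' is a property of the corpus as a whole; no single author can sensibly be credited for it, which is the point being made above.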

Fifth, Ross argues that AI is a form of colonization. "Taking what isn’t yours to create something new without giving credit, having permission, or considering the impact on others isn’t innovation or acting in the spirit of open. At the least it’s theft, at the worst, It’s colonization."

There are different arguments here, but I would argue that all content - not just AI-generated content - takes from someone or something else to produce something that's new. Indeed, I have argued in the past that this very fact is good reason to oppose strong copyright:  

in virtually every article, every post, there is more than a little reuse even of the expressions of ideas, much less the ideas themselves. It's not that I am saying that there is nothing original under the sun. But what I am saying is that there is far less that's original than the supposed originators would like to claim. It is in my view blatantly dishonest to slap a copyright label onto anything you have written unless you are quite sure you have checked and verified the original statement of every idea in your work. For otherwise, your claim to copyright is nothing less than theft

Now I know that not everyone agrees with me on this. But when copyright holders lay claim to things like the language itself then there is a great deal of overreach taking place.

But how is any of this colonialism? Ross explains: "Most OER was created by authors who willingly released their work with an open license. Napster was the sharing of music without the artist’s permission."

Now there is a good argument to be made here. Colonialism is (at least in part) the appropriation by one society of some other society or culture's productivity or wealth for its own gain. And it is (at least in part) the imposing of laws, values and cultural elements by one society onto another. And to the extent that artificial intelligence does this, there is a reason to argue against it. 

I think that's a pretty hard argument to make. Certainly, we could argue that AI is no more or less colonial than any other industry created and promulgated by western capitalist industry. But that just makes AI a product of a system of global capitalism.

That's not something that's going to be changed by opposing AI. That requires global political and economic reform. And, while I am in fact in favour of such reform, I would not start by attacking AI. In fact, I would be looking at how to use AI to change it. Just as I sought, since the 1990s, to use digital technology and open educational resources to change it.

There are a few people who have created a cottage industry for themselves by opposing every aspect of artificial intelligence. I think they're wrong, and have concerns about them misleading educators about AI. But Heather Ross's article takes it a step further.

This is colonialism:

"No, you don't get to wash over or destroy the work we've done and the great work still to come within the open movement. If those encouraging the use of GenAI for open or for GenAI to replace open want to play a new game, that's fine. We can't stop you, but get off our field."

It's not your field. 

To be clear: I have always welcomed people who promote diversity, equity and inclusion. I think things like accessibility are important, I think representation is important - that was one of the reasons I thought MOOCs could be so powerful. And yes - when MOOCs were taken over by capitalists and run into the ground, I didn't stop creating MOOCs. I also believe that personal empowerment and democracy are important, and I want (in true Canadian fashion) people to be as free as they can be. All this is important to me. I've been very clear about my objectives for decades now. They align with, but are not the same as, the social justice movement. But I have never said, "You must agree with me on why we're creating OERs in order to be a part of the OER movement." Nor will I.


My Top 10 Tools List for 2024 23 Aug 2024 10:07 AM (7 months ago)

This is my submission for this year to Jane Hart's annual list.

1. Firefox - this is my web browser of choice, the only one properly effective against tracking and advertising, and where I start and end every day

2. Mastodon - my social network of choice, which I use to post OLDaily articles and photos, and where I keep in touch with the learning technology community. I use the web-based version on the desktop, and Tusky on my phone

3. FreshRSS - this replaced Feedly, whose price got too high and whose AI got too predictable, as my primary feed reader. I manage it using Cloudron. I follow 800 feeds give or take; here's my OPML

4. YouTube - I do a lot of my personal learning over YouTube, not so much for tech as for help with cycling and photography. I also use OBS to webcast my work with new technologies, as well as my gaming and cycling (I also have a PeerTube instance but it requires a lot of care and feeding).

5. Google Mail - while my office uses Outlook I'm using Google, partially because it simply works better, and partially for the convenience, though I'm looking to migrate to something I manage myself so I don't need to depend on it and mailing services like Mailgun for OLDaily.

6. Visual Studio Code - all my development work is done in VS Code, which offers me an excellent development environment as well as integration with GitHub. I haven't used CoPilot yet.

7. Google Docs - works really well as a cloud document authoring tool, and I need a cloud tool because I use several different computers in different places; I don't try to sync with Drive or OneDrive because they are unpredictable. I've tried alternatives like OnlyOffice in NextCloud, but it's slower and takes a lot more work.

8. Lightroom - this is unfortunately my go-to for image processing, mostly because I've never found an alternative that I can depend on. Don't point me to Gimp; I hate Gimp.

9. PowerPoint - I find it the easiest to use for producing slide presentations, and I also use it to create box-and-arrow type diagrams and simple graphic design. Again, my choice here is based on ease-of-use over alternatives like Google Slides or Open Office.

10. Zoom - my videoconferencing app of choice, though of course I use the full range of alternatives depending on what other people are using, and have my own (mostly unused) Jitsi instance.

Looking at this list it's worth reflecting that while some of it is open source, I use many commercial tools as well. This is by necessity, partially in order to be able to interoperate with other people, and partially because the commercial software is easier or has features I like.

What's really important, though, is that I don't depend on any commercial software. I have my own website, cloud environment, and content management system I developed myself, and this is where anything permanent goes. For me, using commercial software is like buying coffee from Starbucks - I use it in passing, but I grind my own beans and make my own coffee at home.






A 100-page Textbook on Logic. 17 Aug 2024 9:41 AM (7 months ago)

I was wondering whether ChatGPT could do it, so I asked it, "Please write me a 100-page textbook on logic." I'll reproduce what it produced below, but I want to observe, first, that yes, it could do this, but that it might take me all day to coax the relevant text for each section out of it, which is more time than I wanted to spend writing a logic textbook.

Here's the output from ChatGPT:


Creating a 100-page textbook on logic is a substantial task that requires careful planning and structuring. Here's an outline for such a textbook, followed by a sample excerpt to give you an idea of how it could be written. If you’re interested, I can create sections or expand on specific topics as needed.

Title: Foundations of Logic: An Introduction

Outline

Chapter 1: Introduction to Logic

1.1 What is Logic?
1.2 Importance of Logic
1.3 Types of Logic: Formal vs. Informal
1.4 Basic Terminology in Logic

Chapter 2: Propositional Logic

2.1 Propositions and Truth Values
2.2 Logical Connectives
2.3 Truth Tables
2.4 Tautologies, Contradictions, and Contingencies
2.5 Logical Equivalence
2.6 Laws of Propositional Logic
2.7 Arguments in Propositional Logic

Chapter 3: Predicate Logic

3.1 Introduction to Predicate Logic
3.2 Quantifiers: Universal and Existential
3.3 Translating Statements into Predicate Logic
3.4 Relations and Functions
3.5 Logical Equivalence in Predicate Logic
3.6 Formal Proofs in Predicate Logic

Chapter 4: Proof Techniques

4.1 Introduction to Proofs
4.2 Direct Proofs
4.3 Indirect Proofs: Proof by Contradiction
4.4 Proof by Contrapositive
4.5 Proof by Mathematical Induction
4.6 Common Logical Fallacies

Chapter 5: Set Theory and Logic

5.1 Basics of Set Theory
5.2 Sets and Propositions
5.3 Operations on Sets and Their Logical Equivalents
5.4 Venn Diagrams and Logic
5.5 Applications of Set Theory in Logic

Chapter 6: Modal Logic

6.1 Introduction to Modal Logic
6.2 Necessity and Possibility
6.3 Kripke Semantics
6.4 Applications of Modal Logic
6.5 Advanced Topics in Modal Logic

Chapter 7: Logic in Mathematics

7.1 Logic and Mathematical Proof
7.2 Logical Foundations of Mathematics
7.3 Gödel’s Incompleteness Theorems
7.4 Logic and Number Theory
7.5 Applications in Algebra and Geometry

Chapter 8: Computational Logic

8.1 Introduction to Computational Logic
8.2 Logic Programming: Prolog
8.3 Boolean Algebra
8.4 Automata Theory and Logic
8.5 Complexity Theory and Logic

Chapter 9: Logic in Philosophy

9.1 Classical Logic and Philosophical Arguments
9.2 Non-Classical Logics: Fuzzy Logic, Intuitionistic Logic
9.3 Logic and Epistemology
9.4 Logic and Metaphysics
9.5 Logic and Ethics

Chapter 10: Logic in Artificial Intelligence

10.1 Introduction to AI and Logic
10.2 Logical Agents and Knowledge Representation
10.3 Reasoning Systems
10.4 Logic and Machine Learning
10.5 The Future of Logic in AI

Chapter 11: Practical Applications of Logic

11.1 Logic in Computer Science
11.2 Logic in Law
11.3 Logic in Everyday Reasoning
11.4 Logic in Decision Making
11.5 Case Studies

Chapter 12: Advanced Topics in Logic

12.1 Non-Monotonic Logic
12.2 Paraconsistent Logic
12.3 Temporal Logic
12.4 Dynamic Logic
12.5 Contemporary Developments in Logic

Sample Excerpt

Chapter 2: Propositional Logic

2.1 Propositions and Truth Values

A proposition is a declarative sentence that is either true or false, but not both. Propositions form the basic building blocks of logical reasoning. For instance, the sentence "The sky is blue" is a proposition because it can be assigned a truth value: it is true if the sky is indeed blue at the time of observation.

In logic, we represent propositions using symbols, typically letters like P, Q, R, etc. For example, let P represent the proposition "The sky is blue." We can then analyze the truth value of P under different conditions.

2.2 Logical Connectives

Logical connectives are symbols or words used to connect two or more propositions to form a new proposition. The most common logical connectives are negation (¬, "not"), conjunction (∧, "and"), disjunction (∨, "or"), implication (→, "if... then"), and biconditional (↔, "if and only if").

2.3 Truth Tables

Truth tables are a systematic way to determine the truth value of a compound proposition based on the truth values of its components. Let’s construct a truth table for the proposition P → Q.

P     Q     P → Q
T     T     T
T     F     F
F     T     T
F     F     T

As seen from the table, the implication P → Q is false only when P is true and Q is false.
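That table can also be generated mechanically. Here is a short sketch (my own addition, not part of the ChatGPT output) that computes material implication and prints the same table:

```python
from itertools import product

def implies(p, q):
    """Material implication: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

# Print a truth table for P -> Q over all four combinations of truth values.
print("P      Q      P -> Q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:<6} {q!s:<6} {implies(p, q)}")
```

The same pattern generalizes to any connective: define it as a Boolean function and enumerate the inputs with `itertools.product`.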

This is a sample section that could be expanded to cover all chapters, each filled with explanations, examples, and exercises. If you'd like to focus on a specific part or want to explore more detailed sections, let me know!


Why I Attribute 17 Jul 2024 11:52 AM (8 months ago)

Alan Levine gave me the old h/t (hat tip) for referencing a post recently on AI and copyright law. In a follow-up he recognized my own version of the h/t, the 'via' link, where I credit Clint Lalonde for the find. It makes for a nice neat chain:

Of course the article had an original source; it came from Canadian Press and was found on CP24, a Toronto news broadcasting station. And it had, after Levine, an eventual reader, who I will describe as 'You'. So in reality, we have an even longer chain:

Now because we all attributed our sources, you (that is, the 'you' referenced in the diagram) can trace the story back to the origin. Of course, you could just look up the URL of the link - but if that became obscured in some way, you could still create that chain.

We could say a lot about that chain.

For one thing, the existence of the chain suggests the possibility of some sort of blockchain that would make it possible to find the source even without an explicit attribution. That's pretty complex, though, and not really essential to any of the points I want to make here.

More importantly, this isn't just a chain of one person passing a link to another to another. All three of Lalonde, Downes and Levine added their own comments. By the time it got to you, you had not only the original story but also some interpretations or perspectives on that story.

Even more importantly, our passing it on represents some sort of endorsement. That doesn't mean we agree with the point of view, or that we're cheering about the result. It means each of us felt that this story was important enough to pass along. By the time you got it, the story had been vetted three times. 

We could call this a web of trust, in the sense that you trust Levine, Levine trusts me, and so on. But it really has nothing to do with trust, except perhaps in the very minimal sense that what we're passing along really originated somewhere; that we didn't just make it up. Again, having a blockchain would remove the need for that trust - but so does simply passing along the original URL at each stage.

Why is all this important?

It illustrates a fundamental difference between a federated social network (like Mastodon, where this chain originated, or web discussion boards, where it continued) and what I'll call a mass social network like Twitter or Facebook. There are three important differences.

First, in a federated social network, the story has to take several steps through a chain to reach you. Probably, you don't follow anyone in this chain but Levine. Because the federated social network is divided into different communities, the spread of something like this story is slowed down and doesn't get to you unless it's passed from community to community. 

But in a mass social network, it can jump straight from CP24 to you. The story spreads immediately. But there's nobody in between to vet it or add commentary. That's great for publishers, because they can reach a mass audience right away. It's great for advertisers and spammers and bad actors. But it removes any sort of vetting, critical reflection or feedback. You're on your own.

Second, even when the story gets to you via someone else on the mass social network (you might follow @Levine and see a retweet from @CP24, say) the social network does as much as it can to obscure the chain. You don't see the actual URL, you see a link generated by the mass social network that may or may not take you directly to the source. Even though it looks like the chain is intact, the chain is broken.

And third, a lot of what you see on a mass social network isn't based on any sort of chain at all. Instead of receiving a link that was vetted by three reliable sources, you receive a link that was promoted by the algorithm. This allows the mass social network to serve you advertising, political messages, and whatever else it wants. And when someone sends you a retweet, they might never even have seen the original story, just a tweet they got from the algorithm.

But that's not all.

The federated social network would work the way it does even if nobody gave h/t or via references. The story you read from Levine would still have made three hops between the source and you. You would still benefit from the vetting, some (though not all) of the comments, and the reliability of receiving an original URL.

But what becomes obscured is the fact of the three hops, and the way the story made its way from community to community. And that's important.

Few people have only one follower. Most people have several. Some have thousands. Nobody has millions, as on mass social networks - there's an upper limit somewhere beyond which having too many followers just doesn't work on a federated social network (which is why we are very unlikely to see advertising-supported federated social networks).

These followers constitute a community. Each person in a federated social network has their own community, though there's also a wider sense of community where a bunch of people follow each other. There's no simple way to define these communities - just think of them as a group of people who might share a space, an interest, or just a friendship, and who message each other more or less often. 

To make things easy, I'm just going to line up Lalonde's community in a single line, like this:

It's represented as a single line, L1...L4, but it's actually more like a pool, where the members are all talking back and forth. Lalonde sends things he gets to all his followers - to the members of his community, and to outsiders like Downes. 

This pattern repeats itself each step of the way:

And though the connectivity is less dense, there are other messages sent to other communities. So the actual interactivity looks a bit like this:

Of course, it's even messier than this - sometimes the messages skip layers, for example. CP24 sends messages to several members of the Lalonde community, and maybe even to the Levine community, and maybe even directly to you. We may all get the same message from various sources, through different chains, and we add up all their influences on us, to come to a final determination of how to regard the story from CP24.

And to go beyond our federated social networks, somewhere to the left of CP24, there might be a fact of the matter, reported on not only by CP24 but by various sources, official or otherwise, all of which are interpreted and (if deemed important) passed along until the different stories about that fact finally reach you. 

And here's the theory...

Getting the message in this way puts you in a much better position to assess the message, evaluate whether it is true, and decide whether to act on it (or pass it along to your own community).

Each community that the message passes through offers its own sort of filter. Each subsequent community regards that filter as both important and reliable. Just as visual signals are sent through several layers of processing in the visual cortex, so also social signals are sent through several layers of processing in the federated social network.

Image: http://webvision.instead-technologies.com/part-x-brain-visual-areas/9-1-primary-visual-cortex-by-matthew-schmolesky/

By contrast, a mass social network short circuits this entire process. There are no layers of filtering; it's like getting light signals dumped directly in the brain. It's like a kind of social sensory overload (note that this is an analogy - I'm not going to leap from this to saying 'social media causes anxiety' - that would be irresponsible).

Before social media, we would have expected that professional media would have handled the functions of the federated social network. News stories would be gathered from more than one source. There would be several layers of verification and editing before the story went to print. And people could read and compare accounts from a number of different sources. 

These functions, though, have all but disappeared from news media. Because news media depended on advertising, and therefore mass audiences, it came to value engagement above all else. The layers of validation and verification were not only expensive overhead, they got in the way of prioritizing engagement. 

In academic media the same sort of mechanism was also in play, with the peer review process in place to ensure appropriate methods were employed and that the authors preserved the progression of ideas through a social scientific network. 

As in the case of newspapers, there was a careful curation of scientific ideas, vetting, follow-up reviews, validation, and editing. This, too, has taken a poor second place to the prioritization of profit by the commercial press, and is something we may regain with (diamond) open access journals, though again, we're going to need to depend on a federated social scientific network to make this work.

And that's why I attribute.

While attribution may be morally good, the reason it's good is that it makes clear these layers of attribution.

The very fact of these layers, and the necessity of having an article (or an idea, or a 'truth') pass through them, is what grounds our understanding of the world. Without them, it's all just noise.

And when I post links in my newsletter, I want it clearly understood by my readers that what I do is to serve as just one layer in this wider practice. I have my own system of selection, prioritization and commentary. And I think it serves the wider interests of society, even while these interests are being undermined elsewhere by mass media and mass social networks.


Some Notes on Intelligence 2 Jul 2024 8:23 AM (9 months ago)

 

Responding to Benjamin Riley, Modeling Minds (Human and Artificial): What are we talking about when we talk about intelligence? See also the discussion here.

In the paper on Modelling Minds Riley writes, "improving one’s knowledge within a specific domain generally does not significantly improve one’s reasoning abilities."

To the extent that this is true (and it is probably not true) it is true only in strictly limited circumstances. I would treat with a lot of scepticism the De Bruyckere, Kirschner and Hulshof paper purporting to make the case.

In the strictly limited case, the principle amounts to this: if you only learn A, you only learn A, and learning A will not help you learn B, even if both A and B involve some higher level skill, C. So (for example) if you only learn physics, which also involves learning mathematics, it will not help you learn engineering, which also involves learning mathematics.

It's hard to believe that the mathematics learned while learning physics would not also help you learn engineering. The only way to make the argument work is to argue that, if you only learn physics, you do not actually learn any mathematics at all. That is, in a nutshell, you cannot generalize from a single instance, because (in principle) you don't know what parts of physics generalize, and what parts are specific to physics.

But it is, in my view, a bad argument (a terrible argument, actually) because as soon as you start to learn engineering, you begin to recognize patterns that you already saw in physics. You see that the exact same shapes and operations are used (specifically, numbers and math functions). You don't actually need some higher level principle in order to obtain this benefit, you just need to recognize the pattern.

That's just as well, because the high level 'skills' stipulated by De Bruyckere, Kirschner and Hulshof are effectively meaningless as descriptions of any sort of knowledge. Here's what they are, in the order presented in the paper:
- creativity (chess)
- problem solving (computer programming)
- executive functions (music)
- better thinking (languages)

The authors go so far as to assert that these higher level 'skills' are innate and not learned. For example, they write, "Creativity is not a skill, and it cannot be taught or learned. Creativity is a quality or characteristic that a person possesses."



The diagram presented in the Riley article has a more plausible set of higher order skills: inference, planning and abstraction in the first instance, and the larger set of test-specific skills (above and below, center, clean up, etc...) later on. But it's interesting to see that learning these higher level skills is not impossible for LLMs, just difficult. As you note, "On the other hand…the fact that LLMs can solve them at all is impressive! Further, some might argue that in time we should expect them to get better at this type of thing."

The more solved examples of this sort you present to an LLM, the more nuanced the pattern recognition becomes, and the more likely it will solve the problem. Critically, it does not need to infer some higher level principle at work in the examples (much less any of the completely specious 'skills' described by De Bruyckere, Kirschner and Hulshof), it just has to detect the pattern. And this is what LLMs, and neural nets in general, are very good at.

Indeed, what these examples will show in general is that the difference between human and LLM performance (if, ultimately, there is any) will come down not to some presupposed inability of LLMs to acquire some putative 'innate' skills, but rather, to the body of previous experience (in other domains) that a human will have and an LLM won't. A human that has taken other tests in other domains will recognize the conventions around test-taking that an LLM will have to figure out for itself.

So the proper test isn't whether learning A also results in someone (or something) learning C. The possibility that there even is a C won't be recognized until attempting to learn B. And previous experience in other domains will suggest the possibility of recognizing similarities between A and B. And that's why it's helpful to teach mathematics as a separate subject. Not because of the content knowledge acquired. But because it eases the learning of B after having learned A by identifying aspects of C that might be relevant.

So what is 'intelligence' on this alternative account? Well, it's not a shopping list of 'higher order skills' - certainly not 'creativity, problem solving, executive function and better thinking', not 'inference, planning and abstraction', and not even 'above and below, center, clean up, etc...'. No. On this view, knowledge is recognition, and intelligence is the capacity to recognize, and this is most certainly something that can be learned or developed in a person, and this represents a cultural difference, generally, and not an inherent or genetic difference.


Perplexity on Connectivism 7 May 2024 6:53 AM (11 months ago)

This is a response from Perplexity, an AI engine that promises accuracy and cites real sources in its answer. The question was: "What is Connectivism". It's worth noting that while the response is reasonably accurate, the sources listed are not necessarily what we would call authoritative.

Connectivism
Theoretical framework for understanding learning in a digital age
 
Definition - Connectivism is a theoretical framework that emphasizes how internet technologies contribute to new avenues of learning, viewing knowledge as a network and learning as a process of pattern recognition.
 
Key Emphasis - Learning does not simply happen within an individual, but within and across networks. It focuses on connecting specialized information sets, and the connections that enable learning are more important than the current state of knowing.
 
Integration - Connectivism integrates principles related to chaos, network, complexity, and self-organization theories.
 
Technology's Effect - Connectivism emphasizes technology's effect on how people live, communicate, and learn.
 
Similarities - Connectivism has similarities with Vygotsky's zone of proximal development (ZPD) and Engeström's activity theory.
 
Connectivism is a learning theory that posits that knowledge and learning are distributed across a network of connections, and that learning occurs through the process of connecting nodes or information sources. This theory is particularly relevant in the digital age, where technology and the internet play significant roles in how we gather, process, and distribute information. 

Key Concepts of Connectivism

Networks and Connections: At the heart of connectivism is the idea that learning occurs within a network. This network consists of nodes (which can be people, data, or objects) that are connected by links. Learning happens when these connections are made and when the learner can effectively navigate through this network to access and use knowledge [2][3].

Learning as a Process of Connecting: According to connectivism, learning is not just about acquiring knowledge but about making connections. This includes connecting different ideas, fields, and concepts, as well as connecting with other learners and information sources. The theory emphasizes the importance of networks and the role of technology in creating and maintaining these connections [2][3].

The Role of Technology: Connectivism acknowledges the integral role of digital technologies such as the internet, social media, and other digital communication tools in facilitating learning. These technologies help to create and maintain the connections that are central to this learning theory [2].

Diversity and Collaboration: The theory values diversity of opinions and collaborative learning environments where learners can share and contrast their views. This diversity within a network is seen as a strength that enhances the learning process [1][3].

Adaptability and Currency: Connectivism places importance on the ability to stay current with information. The rapid pace of change in information technology means that what is true today may not be true tomorrow, so the ability to adapt and update one’s knowledge is crucial [2][3].

Decentralized Learning: Unlike traditional learning theories that may emphasize learning from a central authority, connectivism suggests that learning can occur outside of traditional educational settings. It can happen through a variety of channels and is not confined to an individual but can reside in non-human appliances and across various platforms [1][3].

Historical Context and Development

Connectivism was first introduced by George Siemens and Stephen Downes around 2004-2005. Their work highlighted the challenges that traditional educational theories faced in the digital age and proposed connectivism as a solution that considers the technological advancements and the interconnected nature of information. [2]

Principles of Connectivism
 

Some of the core principles of connectivism include:

    Learning and knowledge rest in a diversity of opinions.
    Learning is a process of connecting specialized nodes or information sources.
    Capacity to know more is more critical than what is currently known.
    Maintaining and nurturing connections is necessary for continual learning.
    The ability to see connections between fields, ideas, and concepts is a core skill.
    Decision-making is itself a learning process, influenced by the changing information landscape [2][3][6].

In summary, connectivism is a modern learning theory that emphasizes the role of technology in learning, the importance of networks, and the dynamic nature of knowledge in the digital age. It challenges traditional educational paradigms by promoting a decentralized, network-driven approach to learning.

Sources

Perplexity would not allow me to copy and paste the sources here. Here they are as displayed on the article:

1. Connectivism Learning Theory - WGU

2. Connectivism - Wikipedia

3. Connectivism Learning Theory - Educational Technology

Nine more sources are listed but not cited in the text.

 

 


Response to Questions for Member States 26 Apr 2024 11:44 AM (11 months ago)

 

This is in response to a contribution to an OAS meeting distributed in my office this morning. It is of course my set of opinions only, and not reflective of any official policy or practice, though I would add that most of these have been undertaken to one degree or another by various levels of Canadian government organizations.

1. How can member states leverage connectivity and emerging technologies to bridge the digital divide, particularly in addressing the challenges faced by young people who are not connected to digital infrastructures? 

Three major avenues:

- Broadband and accessibility mandate – through financial support and public policy ensuring wired, fibre or wireless broadband access is available, including in rural and underserviced areas

- Education and development – support and resources for skills development (digital and data literacy, responsible use, etc.) both in-school and continuing education

- Deployment and use – provision of most major services online, support for institutions and industries to offer services online, programs and mechanisms to connect people to online services


2. How can the OAS Member States accelerate youth’s digital skills, readiness and certification to participate in the digital and innovation economy?


Pathways to digital skills:

- Develop and maintain an ongoing definition or outline of the major constituents of online and digital literacy, taking into account new developments as they occur (such as the recent rise of AI), eg. the UNESCO definition of teacher and student digital literacies

- Develop and deliver ongoing programs for skills development, including an open educational resource base and support for a digital literacy network and major partners (eg. Media Smarts) working in this area

- Develop or contract a skills certification metric and recognition program referencing skills development programs


3. What are the steps to increase employment and entrepreneurship paths for youth harnessing connectivity and emerging technologies?

Provide reference services for employment opportunities and support for alternative employment paths:

- Connect government-supported employment banks to a distributed network of employment reference service (employing, for example, ActivityPub standards for federation) and encourage through policy and support participation by private agencies (such as LinkedIn or Monster)

- Ensure public sector hiring proceeds through such a system and not through a separate parallel system

- Develop intelligent search services for employers to identify candidates; create a mechanism for prospective employees to market skills and make themselves visible to such services

- Through policy and support develop and sustain mechanisms for alternative employment mechanisms, including especially remote work opportunities for people living (and preferring to live) in remote and rural regions (it should be noted that remote work offers an unparalleled opportunity for remote and rural development)

- Support through policy and resources specific mechanisms for entrepreneurship and innovation connected directly to governance mechanisms in these areas, for example, an online tool that supports small business creation directly (rather than merely instructions on how to do it)


4. How can governments ensure that digital literacy programs are effectively reaching young people, equipping them with the necessary skills to participate in the digital economy and fostering entrepreneurship and innovation?

Accurate and repeated assessment of digital literacy skills combined with ongoing study of actual behaviour in the marketplace:

- In-school assessment of digital literacy skills as part of regular and ongoing assessment processes (ideally included in, and not separate from, other assessments)

- Ongoing monitoring of public behaviour in social media environments, including both one-off studies (where participants are specifically enrolled) and longitudinal studies

- Ongoing assessment of online trends and practices generally

- Ongoing collection and assessment of self-managed economic activity through digital channels (this should include not only new enterprises but also eg. Content creation work on TikTok and YouTube, etc.)

- Regular reporting identifying where possible linkages between the development of digital skills and literacies and economic activities related to these, understanding that such relations will rarely be linear and predictable


5. What gender-responsive policies and initiatives can member states implement to address gender disparities in youth employment and digital inclusion?

Develop, highlight and normalize gender inclusivity in all digital technology:

- Support for gender-specific technology initiatives (eg., ‘Women who Code’, which recently suspended operations due to lack of funding)

- Consistent with previous recommendations, support through resources and policy alternative employment strategies to enable women in various stages of child support to participate equally in the marketplace

- Ensure representation of women in media and programs related to digital technology (eg., in advertising material, public profiles, conference participation, etc.) and ensure that these role models are visible and normalized for children and youth considering careers in digital technology

- Support through policy and resources content evaluation and moderating services, including especially content used to train automatic services, to ensure fair and accurate representations of women in the technological workplace

- Support through policy and resources similar programs for other under-represented groups; in Canada this especially applies to support for Indigenous communities as well as minority language support, support for immigrant communities, and support for visible minorities


6. How can member states use the findings and recommendations from this discussion to inform the development of a resolution or declaration that prioritizes connectivity and emerging technologies in the fight against youth unemployment as a component of resilient socioeconomic development across the Americas?

Recognize and appreciate the linkage between connectivity and emerging technologies in the fight against youth unemployment and sustainable development generally

- Sustainable development is possible only to the extent that the population as a whole is able to participate and benefit in that development

- To create that participation, support is necessary to prepare both recipients for participation, as well as to create channels for that participation 

I want to add as one final remark, that this isn't a menu of options that governments can select from; from points one through six it represents a system of responses such that, if one part of the response is missing, the system as a whole is impaired. 

Title Image: Pew.


Text Editing in Ubuntu Cloud Environment Within a Powershell Environment 27 Mar 2024 7:33 AM (last year)


Sorry about the headline; it's mostly a bunch of keywords to help it be found when I need it again.

The Problem

I have an Ubuntu container in a cloud environment managed by Virtuozzo (on Reclaim Cloud). I access it either using the built-in SSH Console or by using ssh within PowerShell. I need to set the crontab, which means I need to use the built-in editor in the Ubuntu environment.

The problem is that the Ubuntu container has minimal text-editing capabilities, and when I try to edit either way I just get a blank or janky screen in which nothing appears to work. I tried changing the editor using 'select-editor' but it displayed nothing. So I couldn't edit the crontab.

The Solution

Rebuilding terminal editing support in the Ubuntu container.

First, logged in as root, I installed vim, using the following commands:

    apt-get update

    apt-get install vim

The first refreshes the package index, and the second does the actual install. Instructions here.

Then, I ran select-editor:

    select-editor

and this time it gave me options. I selected vim.basic. Instructions here.

The other option is nano. Maybe there's a way to configure nano so it also works, but I don't know what it is. This wasn't enough, however. It opens the file OK, but the arrow keys don't work at all. I need to set the terminal type. 

I go to my home directory - for me, working as root, it was /root but for you it will be your user home directory. Change to your home directory, whatever it is:

    cd ~

Then create a .vimrc file

    touch .vimrc

Then open the editor to that file:

    vi .vimrc

Now you're editing the file. But your editor doesn't work! We'll set the terminal inside the editor:

         press 'escape'

type:

    :set term=builtin_ansi

and hit enter. You should see the display of your file change. Instructions here.

Now you can edit your .vimrc file. 

type 

        i

to insert text (here are all the instructions for using vi, in case it's new to you). Then insert the following text:

    set term=builtin_ansi

and press enter.

To save the file in vi, type 'escape', then:

    :wq

which writes and quits. Instructions here.

Now vi will work properly and you can edit the crontab. Use your arrow keys to move your cursor, the i command to insert text, and esc when you're done and want to save or move the cursor again.
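As an aside, the .vimrc step can also be done in one shot from the shell, without needing vi to work first - it writes the same setting the steps above add by hand:

```shell
# Write the terminal setting directly into ~/.vimrc so vi
# works correctly on first launch.
echo 'set term=builtin_ansi' > "$HOME/.vimrc"
cat "$HOME/.vimrc"
```

Note this overwrites any existing .vimrc; use `>>` instead of `>` if you already have one you want to keep.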

While I'm here, here's a great set of instructions for testing and setting up cron.
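And for quick reference, a crontab entry is five time fields (minute, hour, day of month, month, day of week) followed by the command to run. A hypothetical example (the script and log paths are made up for illustration) that runs every five minutes:

```
# m   h   dom  mon  dow  command
*/5   *   *    *    *    /usr/local/bin/myjob.sh >> /var/log/myjob.log 2>&1
```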


 
