A dashing wizard

Jul. 18th, 2025 09:12 am
[syndicated profile] languagelog_feed

Posted by Mark Liberman

From Jesse Sheidlower:

I hereby offer to supervise an MA thesis focused entirely on this one passage.

#linguistics

[image or embed]

— Jesse Sheidlower (@jessesword.com) July 17, 2025 at 2:02 PM

The cited passage is from Terry Pratchett's 1987 novel Mort.

Here's the context:

    Three men had appeared behind him, as though extruded from the stonework. They had the heavy, stolid look of those thugs whose appearance in any narrative means that it’s time for the hero to be menaced a bit, although not too much, because it’s also obvious that they’re going to be horribly surprised.
     They were leering. They were good at it.
     One of them had drawn a knife, which he waved in little circles in the air. He advanced slowly towards Mort, while the other two hung back to provide immoral support.
     “Give us the money,” he rasped.

After some back-and-forth:

     “I think we kill you and take a chance on the money,” he said. “We don’t want this sort of thing to spread.”
     The other two drew their knives.
     Mort swallowed. “This could be unwise,” he said.
     “Why?”
     “Well, I won’t like it, for one.”
     “You’re not supposed to like it, you’re supposed to—die,” said the thief, advancing.
     “I don’t think I’m due to die,” said Mort, backing away. “I’m sure I would have been told.”
     “Yeah,” said the thief, who was getting fed up with this. “Yeah, well, you have been, haven’t you? Great steaming elephant turds!”
     Mort had just stepped backwards again. Through a wall.
     The leading thief glared at the solid stone that had swallowed Mort, and then threw down his knife.
     “Well, —- me,” he said. “A —-ing wizard. I hate —-ing wizards!”
     “You shouldn’t —- them, then,” muttered one of his henchmen, effortlessly pronouncing a row of dashes.
     The third member of the trio, who was a little slow of thinking, said, “Here, he walked through the wall!”

One quasi-linguistic note, for anyone who takes Jesse up on his offer: I presume that the image in Jesse's skeet comes from a printed book, because the Kindle version (inappropriately) eliminates the spaces corresponding to the boundaries of the bleeped words:

That's a typographical convention that annoys me when it eliminates spaces next to punctuational dashes. In Jesse's image, there are spaces on both sides of all of the dashes, except after the ones preceding "ing". That also strikes me as inappropriate to context — in the text reproduced above, I've added spaces around each bleeped word, but not between the intra-word letter-bleeping dashes.

Another linguistic question is how the readers of the Audible audiobook version render the dashes. However, I'm not willing to spend $23.24 to learn the answer (or even the special Audible-member price of $10.49), since my master's thesis days are long past.

In related news, there's a new-ish edition of The F-Word ….

 

[syndicated profile] languagelog_feed

Posted by Victor Mair

The Weird Way Language Affects Our Sense of Time and Space
The languages we speak can have a surprising impact on the way we think about the world and even how we move through it.
Matt Warren and Miriam Frankel
This post originally appeared on BBC Future and was published November 4, 2022. This article is republished here (getpocket, Solo) with permission.

When I first scanned this article, I thought it was so lackluster, especially on contentious waters that we had successfully navigated just a few weeks ago (see "Selected readings"), that I decided not to write about it on Language Log. However, several colleagues called the article to my attention and said that it raised interesting questions, so I have gone ahead and posted on it despite my reservations.

Cognitive scientist Lera Boroditsky, one of the pioneers of research into how language manipulates our thoughts, has shown that English speakers typically view time as a horizontal line. They might move meetings forward or push deadlines back. They also tend to view time as travelling from left to right, most likely in line with how you are reading the text on this page or the way the English language is written.

This relationship to the direction text is written and time appears to apply in other languages too. Hebrew speakers, for example, who read and write from right to left, picture time as following the same path as their text. If you asked a Hebrew speaker to place photos on a timeline, they would most likely start from the right with the oldest images and then locate more recent ones to the left. 

Mandarin speakers, meanwhile, often envision time as a vertical line, where up represents the past and down the future. For example, they use the word xia ("down") when talking about future events, so that "next week" literally becomes "down week". As with English and Hebrew, this is also in line with how Mandarin traditionally was written and read – with lines running vertically, from the top of the page to the bottom.

So much for monolinguals.

Things start to get really strange, however, when looking at what happens in the minds of people who speak more than one language fluently. "With bilinguals, you are literally looking at two different languages in the same mind," explains Panos Athanasopoulos, a linguist at Lancaster University in the UK. "This means that you can establish a causal role of language on cognition, if you find that the same individual changes their behaviour when the language context changes."

Bilingual Mandarin and English speakers living in Singapore also showed a preference for left to right mental time mapping over right to left mental mapping. But amazingly, this group was also quicker to react to future oriented pictures if the future button was located below the past button – in line with Mandarin. Indeed, this also suggests that bilinguals may have two different views of time's direction – particularly if they learn both languages from an early age. 

One of the most discussed Whorfian topics on Language Log has to do with grammar and economics.

In 2013, Keith Chen, a behavioural economist at the University of California, Los Angeles, set out to test whether people who speak languages that are "futureless" might feel closer to the future than those who speak other languages. For example, German, Chinese, Japanese, Dutch and the Scandinavian languages have no linguistic barrier between the present and the future, while "futured languages", such as English, French, Italian, Spanish and Greek, encourage speakers to view the future as something separate from the present.

He discovered that speakers of futureless languages were more likely to engage in future-focused activities. They were 31% more likely to have put money into savings in any given year and had accumulated 39% more wealth by retirement. They were also 24% less likely to smoke, 29% more likely to be physically active, and 13% less likely to be medically obese. This result held even when controlling for factors such as socioeconomic status and religion. In fact, OECD countries (the group of industrialised nations) with futureless languages save on average 5% more of their GDP per year.

This correlation may sound like a fluke, with complex historical and political reasons perhaps being the real drivers. But Chen has since investigated whether variables such as culture or how languages are related could be influencing the results. When he accounted for these factors, the correlation was weaker – but nevertheless held in most cases. "The hypothesis still seems surprisingly robust to me," argues Chen. 

Despite all of their enthusiastic debates over whether some languages can make us wealthy and healthy and other languages make us poor and perilous, linguists are still arguing over whether the language we speak can leave us successful in business and robust (!) in life.  I wonder, though, whether the question has been properly phrased, and what Benjamin Lee Whorf himself would say of the economic claims that are being made on his behalf.

 

Selected readings

[Thanks to Mark Metcalf and Richard Warmington]

Weekend Events Starting Tomorrow!

Jul. 17th, 2025 01:50 pm
[syndicated profile] vintage_ads_feed

Posted by misstia

18-20 Friday-Sunday Weekend Events: Famous Disabled People (MS, Parkinsons, etc are disabilities) and 'Down on the Farm' so anything farm related and ads from 1933

Ads that include famous disabled people (ie: MS, Parkinsons, & mental illnesses are disabilities too)

AND

'Down on the Farm' so anything farm related: ads for tractors, ads showing fields of crops, ads involving farm animals in any way (so yes Borden ads are fine), etc

AND

Ads from 1933, cuz um, looks around America, yeah...ads from 1933
[syndicated profile] languagelog_feed

Posted by Mark Liberman

In a comment on "Alignment", Sniffnoy wrote:

At least as far as I'm aware, the application of "alignment" to AI comes from Eliezer Yudkowsky or at least someone in his circles. He used to speak of "friendly AI" and "unfriendly AI". However, the meaning of these terms was fairly different from the plain meaning, which confused people. So at some point he switched to talking about "aligned" or "unaligned" AI.

This is certainly true — see e.g. Yudkowsky's 2016 essay "The AI alignment problem: why it is hard, and where to start".

However, an (almost?) exactly parallel usage was established in the sociological literature, more than half a century earlier, as discussed in Randall Stokes and John Hewitt, "Aligning actions" (1976):

A substantial body of literature has been developed within the symbolic interactionist tradition that focuses upon various tactics, ploys, methods, procedures and techniques found in social interaction under those circumstances where some feature of a situation is problematic. Mills' (1940) concept of motive talk, Scott and Lyman's (1968) discussion of accounts, Hewitt and Hall's (1973) and Hall and Hewitt's (1970) quasi-theorists, and Hewitt and Stokes' (1975) disclaimers are among the contributions to this literature. In addition, some of Goffman's work (1959; 1967; 1971) addresses itself to a similar set of issues, and McHugh's (1968) analysis of the concept of the definition of the situation is pertinent to the question of how people deal with problematic occurrences.

We refer to these phenomena collectively as aligning actions. Largely verbal efforts to restore or assure meaningful interaction in the face of problematic situations of one kind or another, activities such as disclaiming, requesting and giving accounts, constructing quasi-theoretical explanations of problematic situations, offering apologies, formulating the definition of a situation, and talking about motives illustrate a dual process of alignment. First, such activities are crucial to the process in which people create and sustain joint action by aligning individual lines of conduct when obstacles arise in its path. Second, and of particular import for the present analysis, aligning actions can be shown to play a major part in sustaining a relationship between culture and conduct, in maintaining an alignment between the two in the face of actions that depart from cultural expectations or definitions of what is situationally appropriate.

More from later in the paper:

Much, though not all, that is problematic in everyday life can be conceived in terms of a metaphor of alignment, a term that has a double meaning in the present analysis. First, alignment is a central metaphor in the interactionist analysis of conduct formation. Social interaction is conceived as a process in which people orient their conduct toward one another and toward a common set of objects. In this mutual orientation of conduct, an effort is made by participants to align their individual acts, one to another, in the creation of joint or social acts.

[…]

The second meaning of alignment — and in the present essay the more crucial one — revolves around the fact that problematic situations often involve misalignment between the actual or intended acts of participants and cultural ideals, expectations, beliefs, knowledge, and the like. "Alignment" in this sense has to do with perceived discrepancies between what is actually taking place in a given situation and what is thought to be typical, normatively expected, probable, desirable or, in other respects, more in accord with what is culturally normal.

That second sense is exactly what is now meant by alignment in the "AI alignment" discussion, or so it seems to me.

Yudkowsky's 2016 essay doesn't cite the sociological usage, and there's no bibliography to check — according to footnote 1, "This document is a complete transcript of a talk that Eliezer Yudkowsky gave at Stanford University for the 26th Annual Symbolic Systems Distinguished Speaker series". I don't find a reference in a quick scan of his other publications either, so presumably he perceived the term as just a normal part of the language of intellectual discourse.

Also unclear to me is the connection between the sociologists' alignment and the D&D version.

But anyhow, as the earlier post noted, "alignment, like journey, is an old word that has been finding new meanings and broader uses over the past few decades".

Tracks

Jul. 16th, 2025 02:38 pm
[syndicated profile] languagelog_feed

Posted by Mark Liberman

In a comment on "Alignment", Bob Ladd wrote:

I was also curious about "track" in the announcement quoted in the OP. I don't think I've ever been to a conference where you can focus on a specific "track". Is this a tech thing? An AI thing? Or have I just not been paying attention?

The portion of the AAAI-26 page in question [emphasis added]:

AAAI-26 is pleased to announce a special track focused on AI Alignment.

Similar language can be found in the pages for AAAI-25:

AAAI-25 will feature technical paper presentations, special tracks, invited speakers, workshops, tutorials, poster sessions, senior member presentations, competitions, and exhibit programs, and a range of other activities to be announced.

And the same sentence in the page for AAAI-24:

AAAI-24 will feature technical paper presentations, special tracks, invited speakers, workshops, tutorials, poster sessions, senior member presentations, competitions, and exhibit programs, and a range of other activities to be announced.

A similar usage can be found in the announcements for "Special Sessions" at Interspeech 2024 and Interspeech 2025:

Inaugurated for Interspeech 2024, the BLUE SKY track will again be open for submission this year. The Technical Program Chairs would like to encourage authors to consider submitting to this track of highly innovative papers with strong theoretical or conceptual justification in fields or directions that have not yet been explored. Large-scale experimental evaluation will not be required for papers in this track. Incremental work will not be accepted. If you are an 'out-of-the-box' thinker, who gets inspiration from high-risk, strange, unusual or unexpected ideas/directions that go purposefully against the mainstream topics and established research paradigms — please consider submitting a paper on this challenging and competitive track! Who knows you might launch the next scientific revolution in the speech field? Please note that to achieve the objectives of this BLUE SKY track, we will ask the most experienced reviewers (mainly our ISCA Fellow members) to assess the proposals.

Like many similar conferences, IEEE ICASSP 2025 has an "Industry Track". Here's a similar list from ACL 2025.

And back in 2013, the IEEE published a page on "Conference tracks" in the "2013 7th IEEE International Conference on Digital Ecosystems and Technologies (DEST)", which lists tracks A ("foundations of digital ecosystems and complex environment engineering") through K ("Big data ecosystems").

So without further delving, we can conclude that "track" has been widely used for a while to mean a set of conference presentations that are temporally and spatially diffuse, but topically coherent. This is useful for participants finding their way through multiple parallel sessions, and (at least sometimes) it also plays a role in the refereeing of submissions.

The cultural orbit of this usage is not clear to me — I don't see it in materials for LSA or MLA meetings, but it's certainly common in conferences like AAAI, Interspeech, IEEE, ACL, and so on. Before thinking about Bob's question, it never occurred to me that it was not a natural and universal usage.


2004 Adidas

Jul. 16th, 2025 01:35 pm
[syndicated profile] vintage_ads_feed

Posted by delanotooke

2004 Adidas.png


Fauja Singh was killed by a hit-and-run driver earlier this week as he strolled through his native village in India.  More on his remarkable life here.

Alignment

Jul. 15th, 2025 07:57 pm
[syndicated profile] languagelog_feed

Posted by Mark Liberman

In today's email there was a message from AAAI 2026 that included a "Call for the Special Track on AI Alignment":

AAAI-26 is pleased to announce a special track focused on AI Alignment. This track recognizes that as we begin to build more and more capable AI systems, it becomes crucial to ensure that the goals and actions of such systems are aligned with human values. To accomplish this, we need to understand the risks of these systems and research methods to mitigate these risks. The track covers many different aspects of AI Alignment, including but not limited to the following topics:

  • Value alignment and reward modeling: How do we accurately model a diverse set of human preferences, and ensure that AI systems are aligned to these same preferences?
  • Scalable oversight and control: How can we effectively supervise, monitor and control increasingly capable AI systems? How do we ensure that such systems behave according to predefined safety considerations?
  • Robustness and security: How do we create AI systems that work well in new or adversarial environments, including scenarios where a malicious actor is intentionally attempting to misuse the system?
  • Interpretability: How can we understand and explain the operations of AI models to a diverse set of stakeholders in a transparent and methodical manner?
  • Governance: How do we put in place policies and regulations that manage the development and deployment of AI models to ensure broad societal benefits and fairly distributed societal risks?
  • Superintelligence: How can we control and monitor systems that may, in some respects, surpass human intelligence and capabilities?
  • Evaluation: How can we evaluate the safety of models and the effectiveness of various alignment techniques, including both technical and human-centered approaches?
  • Participation: How can we actively engage impacted individuals and communities in shaping the set of values to which AI systems align?

This reminded me of my participation a few months ago in the advisory committee for "ARIA: Aligning Research to Impact Autism", which was one of the four initiatives of the "Coalition for Aligning Science".

Alignment, like journey, is an old word that has been finding new meanings and broader uses over the past few decades. I suspect a role for Dungeons & Dragons, which has been impacting broader culture in many ways since the 1970s:

In the Dungeons & Dragons (D&D) fantasy role-playing game, alignment is a categorization of the ethical and moral perspective of player characters, non-player characters, and creatures.

Most versions of the game feature a system in which players make two choices for characters. One is the character's views on "law" versus "chaos", the other on "good" versus "evil". The two axes, along with "neutral" in the middle, allow for nine alignments in combination. […]

The original version of D&D (1974) allowed players to choose among three alignments when creating a character: lawful, implying honor and respect for society's rules; chaotic, implying rebelliousness and individualism; and neutral, seeking a balance between the extremes.

In 1976, Gary Gygax published an article titled "The Meaning of Law and Chaos in Dungeons and Dragons and Their Relationships to Good and Evil" in The Strategic Review Volume 2, issue 1, that introduced a second axis of good, implying altruism and respect for life, versus evil, implying selfishness and no respect for life. The 1977 release of the Dungeons & Dragons Basic Set incorporated this model. As with the law-versus-chaos axis, a neutral position exists between the extremes. Characters and creatures could be lawful and evil at the same time (such as a tyrant), or chaotic but good (such as Robin Hood).
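The nine-way grid follows mechanically from the two three-valued axes. A throwaway Python sketch (the axis names and the conventional "true neutral" label for the doubly neutral cell are standard D&D usage, not from the quoted passage):

```python
from itertools import product

# The two D&D axes, each with a neutral midpoint.
ethics = ["lawful", "neutral", "chaotic"]   # law vs. chaos
morals = ["good", "neutral", "evil"]        # good vs. evil

# Crossing the axes yields the familiar nine alignments;
# the doubly neutral cell is conventionally "true neutral".
alignments = [
    "true neutral" if e == m == "neutral" else f"{e} {m}"
    for e, m in product(ethics, morals)
]

print(len(alignments))  # 9
print(alignments)
```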

For some metaphorical extensions, see "Alignment charts and other low-dimensional visualizations", 1/7/2020.

A quick scan of Google Scholar results shows a steady increase in references including the word alignment, through 2014 or so. (I've included counts for the word "results" to check for general corpus-size increases.)

  YEARS   ALIGNMENT RESULTS  RATIO
1970-1974   19000   200000   10.53
1975-1979   31700   350000   11.04
1980-1984   56900   355000    6.24 
1985-1989  119999   305000    2.54 
1990-1994  207000   362000    1.75 
1995-1999  363000   546000    1.50
2000-2004  644000   799000    1.24 
2005-2009 1080000   856000    0.79 
2010-2014 1220000   760000    0.62 
2015-2019 1200000  1260000    1.05 
2020-2024  967000  1800000    1.86
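Reading the RATIO column as the "results" count divided by the "alignment" count reproduces the table's figures — a quick sanity-check sketch, with the counts copied from the table above:

```python
# Counts from the table above: ("results" hits, "alignment" hits) per period.
counts = {
    "1970-1974": (200000, 19000),
    "1975-1979": (350000, 31700),
    "1980-1984": (355000, 56900),
    "1985-1989": (305000, 119999),
    "1990-1994": (362000, 207000),
    "1995-1999": (546000, 363000),
    "2000-2004": (799000, 644000),
    "2005-2009": (856000, 1080000),
    "2010-2014": (760000, 1220000),
    "2015-2019": (1260000, 1200000),
    "2020-2024": (1800000, 967000),
}

# A falling ratio means "alignment" is growing faster than the
# corpus-size proxy; the ratio bottoms out in 2010-2014.
for years, (results, alignment) in counts.items():
    print(f"{years}  {results / alignment:5.2f}")
```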

And a graphical version:

It would be interesting to track the evolution, across the decades in various cultural areas, of meaning and sentiment for alignment and aligning.

[syndicated profile] vintage_ads_feed

Posted by delanotooke

1963 Thrill.png



Interesting product name.  I can think of few things less thrilling than a sink full of dirty dishes.  
[syndicated profile] vintage_ads_feed

Posted by delanotooke

1925 Paramount Pictures 2.jpg


A slew of ornate movie theaters were built in the twenties, including several in Louisville.  There's only one left standing here now - The Louisville Palace - now fully restored and used primarily as a concert venue.  It is stunning.
