The objects of linguistic investigation can be fruitfully compared just to the extent that they are (components of) human languages. Their diversity is constrained by specifically linguistic universals, but also, and less trivially, by other, more general human universals. Looking into both the cultural and the biological forces that support and limit linguistic diversity, this working group focuses on determining precisely the place of language and languages among the characteristics of our species.
Language comparison presupposes comparability, and this in turn presupposes the common denominator of definitional universals. The idea of this workshop is to look both within and beyond the field of linguistics to find out about the underpinnings of linguistic universals, both of the definitional variety (What makes the cluster of phenomena defined by the notion of language coherent?) and, especially, of the empirical kind (Which non-definitional features cluster around the definitional properties, and why?). To do this it is necessary to determine the place of linguistic universals among the human universals (Brown 1991). Since the latter concern both the human body with its brain and mind and the cultures and societies it lives in (Enfield and Levinson 2006), contributions are invited from all relevant fields: biology, neuroscience, cognitive science, anthropology, sociology, philosophy and, last but not least, linguistics. Chomsky (2004) is certainly right in assuming that genetic endowment, experience, and language-independent principles of efficient computation all contribute to language development in the individual, but it remains controversial (a) what the genetic endowment consists of, (b) how these factors interact in the individual, and (c) how the individual mind participates in the shared, i.e. distributed and collective, mind. Handedness is certainly part of our genetic endowment, and so Krifka's (2006) proposal that it might motivate the universal availability of topic-comment structuring is an excellent example of the kind of phenomena this workshop is intended to collect and relate to one another. Cultural constraints on grammar as discussed by Everett (2005) are another case in point. Ideally, the workshop will draw together contributions from different fields to document the state of knowledge in this domain and instigate progress towards an increasingly complete picture of the ways human universals shape human language.
Brown, Donald E. (1991): Human universals. New York: McGraw-Hill.
Chomsky, Noam (2004): Biolinguistics and the Human Capacity. Lecture MTA Budapest, May 17.
Enfield, N. J. / Stephen C. Levinson (eds.) (2006): Roots of human sociality: culture, cognition, and interaction. Oxford: Berg.
Everett, Daniel L. (2005): Cultural constraints on grammar and cognition in Pirahã. In: Current Anthropology 46: 621-46.
Haun, D. B. M. / Call, J. / Janzen, G. / Levinson, S. C. (2006): Evolutionary psychology of spatial representations in the Hominidae. In: Current Biology 16: 1736-1740.
Krifka, Manfred (2006): Functional similarities between bimanual coordination and topic / comment structure. In: Ishihara, S. / Schmitz, M. / Schwarz, A. (eds.): Interdisciplinary Studies on Information Structure 08, Potsdam.
Levinson, Stephen C. (2006): Cognition at the heart of human interaction. In: Discourse Studies 8(1): 85-93 (special issue: Discourse, interaction and cognition).
Sperber, D. / Hirschfeld, L. (2004): The cognitive foundations of cultural stability and diversity. In: Trends in Cognitive Sciences 8(1): 40-46.
Tomasello, M. (2004): What kind of evidence could refute the UG hypothesis? In: Studies in Language 28: 642-644.
|14:00||David Poeppel and Dietmar Zaefferer: Welcome|
|14:05||Dietmar Zaefferer: Definitional and empirical features of humans and languages (Abstract)|
|14:30||Christoph Antweiler: The many determinants of human universals (Abstract)|
|15:00||Peter J. Richerson: Patterns of human conflict and cooperation: Language and linguistic diversity (Abstract)|
|16:30||Rainer Dietrich, Werner Sommer, Chung Shan Kao: The influence of syntactic structures on the time course of microplanning. A crosslinguistic experiment (Abstract)|
|17:00||Adriana Hanulíková, James M. McQueen, and Holger Mitterer: The differing role of consonants and vowels in word recognition. A universal principle? (Abstract)|
|17:30||Michael Ullman: Variability and redundancy in the neurocognition of language (Abstract)|
|18:30||End of session|
|09:00||Andrew Nevins: Invariant properties of human grammatical systems (Abstract)|
|09:30||Joana Rosselló: Duality of patterning in the architecture of language: a reassessment (Abstract)|
|10:00||Ljiljana Progovac: The Core Universal and Language Evolution (Abstract)|
|10:30||Wolfram Hinzen, Boban Arsenijevic: Recursion as an epiphenomenon (Abstract)|
|11:30||Hedde Zeijlstra: Parameters are epiphenomena of grammatical architecture (Abstract)|
|12:00||Jeffrey Lidz: The abstract nature of syntactic inference in language acquisition (Abstract)|
|13:00||End of session|
|09:00||Asifa Majid: Constraints on Event Semantics across Languages (Abstract)|
|09:30||Friedemann Pulvermüller: Brain constraints on language universals: On the neural basis of phonological and semantic categories (Abstract)|
|10:00||David Poeppel: Linguistics and the future of the neurosciences (Abstract)|
|11:00||Thomas G. Bever: The universal and the individual in language (Abstract)|
|12:00||End of session|
The closest airport is Nürnberg. From Nürnberg Airport you can take the underground (U2) to Nürnberg main station (15 min) and from there catch a direct train to Bamberg (40 min).
Other convenient airports are Leipzig and Munich; from both, take the local train (S-Bahn) to the main station and from there a direct train to Bamberg (the high-speed ICE train Berlin-Munich takes slightly more than two hours from either Leipzig or Munich).
Within Germany you can get to Bamberg by train. The ICE Berlin-Munich stops in Bamberg. The train station is within walking distance of the University.
Our workshop is part of the 30th annual meeting of the German Linguistic Society (DGfS): http://www.uni-bamberg.de/en/guk/tagungen/dgfs2008/. It is necessary to register for this conference if you want to attend our workshop.
We recommend registering for the conference by January 31; after this date higher registration fees will be charged. You can register for the DGfS Annual Convention at http://www.uni-bamberg.de/en/anmeldung/dgfs2008.
You can book rooms at the following link: http://germany.nethotels.com/info/bamberg/events/dgfs2008. (German language link)
Prices range from 45 Euro for an economy single room to 154 Euro for a double room. The prices include breakfast. The two largest room contingents, in the hotels of the Welcome chain, are held until December 20, 2007. Nearly all the rest will be available until January 25, 2008.
Youth hostel (beautifully situated): http://www.jugendherberge.de/jh/bamberg/ (German language link)
Language comparison presupposes comparability, and this in turn presupposes the common denominator of definitional universals. Definitions of the notion of language tend to be controversial. Definitions of the notion of human even more so. Nonetheless it is shown that it pays to look at the range of variation in both and at the corresponding options for drawing the line between empirical and a priori statements about humans and languages. The literature on these issues is briefly outlined and it is argued (a) that language is best conceptualized in terms of its function, and not in terms of the means used for fulfilling it, and (b) that the core function of language consists in affording its users general purpose unbounded sharing of mental activities.
Based on this assumption it is possible to narrow down, within the space of human universals, the range of possible subspaces occupied by the specifically linguistic universals. Since the former concern both the human body with its brain and mind and the cultures and societies it lives in, all these factors must be suspected of contributing constraints on language diversity (and uniformity), and there is no a priori reason for excluding any one of them. Given the extremely small intra-species variance in the genetic endowment of humans and the seemingly great diversity of their cultures, it is tempting to underestimate the role of nonbiological forces. But considering that the faculty of cultural transmission is itself part of human biology, the possibility that the domain-specific part of the basis for language acquisition is rather narrowly confined cannot be ruled out from the outset. Since the biological disposition for language undeniably requires cultural triggering in order to be activated, it is hard to dismiss the idea that partial triggering may result in partial activation of this disposition.
There is an increasing amount of converging evidence that our species is unique in its double determination by the transmission of both genes and cultural patterns. It is submitted that this specific mixture (idiosyncrasy in the literal sense) should be taken both as an important starting point and as a promising research objective for contemporary linguistics.
Human societies are remarkably diverse, but this diversity is not limitless. Human cultural and linguistic variation is patterned, and the spectrum of variation is not as wide as the “ethnographic hyperspace” we could imagine. There are phenomena regularly found in all human cultures. Among the better-known examples, out of hundreds of universals, are ethnocentrism, incest avoidance and social reciprocity.
Universals are ubiquitous at the level of cultures, not individuals. Universals can be demonstrated empirically by cross-cultural comparison and by historical (diachronic) comparison, and this can be supplemented by cross-species comparisons. The argument is that universals are related to human nature but are isomorphic neither with (a) human nature, (b) species traits, (c) psychic unity, nor with (d) anthropological constants as conceived in philosophy.
As regards causes, universals are partly (1) based directly in human biology, or reflect our evolved nature indirectly. Thus, specific aspects of human nature can be carefully inferred from specific universals. Beyond that, there are additional factors all too often overlooked. Some universals are due to (2) global cultural transfer or diffusion, (3) systemic effects of patterned social relations, or (4) emergent interaction effects in “ultrasocial” systems. Others are related (5) to the simple fact of living in a material world. Thus, lists of traits found ubiquitously throughout human cultures cannot, pace e.g. Steven Pinker, simply be equated with human genes or human nature. In sum, the paper tries to show that a precise concept of, and empirical knowledge about, human universals is needed for an understanding of language universals.
Beyond anthropological questions and linguistic issues, cultural universals are currently relevant within a world characterized by a globalized obsession with culture as difference.
For the last few tens of thousands of years--perhaps longer--humans have lived in tribal-scale cooperative units. Language is both made possible by tribal cooperation and a potent means of organizing this cooperation. The Janus face of intratribal cooperation is intertribal conflict. Usually intertribal conflict is between neighbors with different languages or different dialects. In the last 5,000 years the growth of complex societies has bound tribal-scale formations into large social systems composed of a diverse array of constituent organizations. Language still reflects patterns of conflict and cooperation, as dialects, jargons, and ethnic divisions of labor reflect patterns of cooperation, competition, and conflict within complex societies.
Competition among lexical candidates is a central process in spoken word recognition and is modulated by language-specific properties such as metrical structure and phonotactics. Previous research, however, has added the language-universal Possible-Word Constraint (PWC) (Norris et al., 1997) to these properties. According to this account, lexical parses including non-syllabic sequences such as single consonants are disfavoured in the competition. Single consonants, because they do not constitute syllables, are treated as non-viable residues. This is held to apply in a universal manner: differing phonological constraints on words across languages are not relevant, as confirmed in studies on English, Sesotho, Japanese, and Dutch.
Using the word-spotting paradigm (Cutler & Norris, 1988), we tested the predictions of the PWC in Slovak and German. In Slovak, unlike German, single consonants such as /g/ and /f/ (allophones of k ‘to’ and v ‘in’) are words, but cannot be syllable peaks. We examined whether Slovak and German listeners follow the universal principle and thus have difficulty recognizing words in consonant contexts.
German listeners were faster at spotting words when preceded by syllables (e.g., Rose in suckrose) as compared to consonants (krose). Slovak listeners, however, detected words faster in an ungrammatical prepositional context (e.g., ruka ‘hand’ in gruka) than in syllabic and non-prepositional contexts (dugruka, truka). Two lexical-decision experiments revealed that the word-spotting effects could not be attributed to different acoustic realizations of the targets over conditions. A further Slovak experiment controlling for stress variations revealed faster responses in the syllable context than in the non-prepositional context, but the prepositional condition remained the fastest one.
These results suggest that the simple form of the PWC cannot be maintained. Single consonants are viable residues, but only if they are meaningful units in a given language. While the PWC should not be abandoned, it needs to be modified to incorporate language-specific patterns.
Norris, D., McQueen, J. M., Cutler, A., & Butterfield, S. (1997). The possible-word constraint in the segmentation of continuous speech. Cognitive Psychology, 34, 191-243.
Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121.
In the study of language there is often a focus on revealing “the” mechanism for a given function, or on discovering how such a mechanism develops, how it evolved, or where it is located in the brain. For example, controversies are usually centered on asking whether language, or some aspect of language such as syntax or the lexicon, are computed or represented or learned in this or that way. That is, the debate is usually framed in terms of mutually exclusive competing hypotheses.
However, many problems are solved in more than one way. If we want to write, we can use a pencil or a pen or a computer. If we want to warm up we can put on more clothes, make a fire, or eat a bowl of hot soup. Crucially, such variability and redundancy in underlying mechanisms are also widespread in biological systems. Biological solutions for thermal regulation include fur, fat, size (larger animals lose heat more slowly), evaporation (a panting dog, a sweating athlete), various metabolic processes, and even anti-freeze molecules.
Although variability and redundancy can be costly (it takes more energy to make both fur and fat), they also confer important advantages. Different mechanisms are likely to have different and complementary characteristics. Thus the ability to depend on more than one system can lead to a variety of functional advantages.
Using various methods that can reveal underlying computational and biological mechanisms, research has begun to demonstrate the presence of redundant mechanisms in the neurocognition of language, as well as variability across individuals and populations with respect to their relative dependence on each mechanism.
Here we will focus on one aspect of language in which such redundancy has been observed. A crucial issue in the study of language is how rule-governed complex forms such as walked or the cat are learned, represented and computed. Although many explanatory accounts have been proposed, a basic theoretical debate contrasts two apparently mutually exclusive perspectives: On one side are those who argue that such complex forms are put together from their memorized parts (walk, -ed, the, cat) by a rule-governed grammatical system (or set of systems) that is distinct from the mental lexicon. On the other side are those who argue that complex forms are essentially no different from lexicalized forms like walk and cat, and that all forms are learned and processed by the same neurocognitive or computational mechanisms.
What do the data say? On the one hand, the data suggest that there are indeed (at least) two distinct systems, a lexical system for memorizing pieces of information, and another system (or set of systems) that underlies rule-governed composition. However, the existence of the latter system does not require that it necessarily underlie all rule-governed forms in all cases. Rather, accumulating evidence suggests that while complex forms can be and in fact often are computed by this system, they can also or instead be stored and processed in the mental lexicon, consistent with at least some aspects of single-mechanism claims.
Moreover, the data suggest that the relative reliance on each system varies across individuals, at least partly with the degree of (dys)functionality of aspects of one or the other system. For example, women, who have an advantage (possibly estrogen-modulated) at remembering verbal material, are more likely than men to retrieve complex forms, and correspondingly less likely to compose forms in the grammatical system. Other factors that appear to modulate the relative reliance of complex forms on each system include handedness, age of acquisition, bilingualism, genetic factors, and various disorders, including Specific Language Impairment. Theoretical, empirical, educational and clinical implications will be discussed.
“One major difficulty with the Whorfian Hypothesis is that it is not immediately clear what sort of evidence would tend to disprove it conclusively” (Houston, 1972: 196). Scientific endeavours to find a remedy for this nuisance follow mainly two lines of reasoning. One is the cross-linguistic comparison of colour perception (Brown & Lenneberg, 1954; Winawer et al., 2007) and of other categories (Özcaliskan & Slobin, 2003). The second and more recent paradigm is the so-called “thinking for speaking” approach (Slobin, 1996; von Stutterheim & Nüse, 2003). Its proponents argue that, for instance, perspective taking in discourse planning is biased by language-specific means for temporal or spatial reference. According to this line of theorizing, one might further assume that lower and more automated procedures of utterance production are also linguistically biased.
Our contribution focuses on the time course of operations at the interface between two levels of processing, the semantic and the syntactic. We present results from a crosslinguistic experiment on utterance production. Speakers of Chinese, German and Polish were prompted to produce yes/no-questions, a type of speech act that is coded syntactically differently in the three languages. Using a two-choice Go/NoGo paradigm with LRP measures, we tested the chronological correspondence of conceptualization and syntactic coding processes. The results are at variance with the assumption of linguistic relativity at this level of language processing.
Brown, Roger, and Eric Lenneberg. "A Study in Language and Cognition." Journal of Abnormal and Social Psychology 49 (1954): 454-62.
Houston, Susan H. A Survey of Psycholinguistics. Edited by C.H. Van Schooneveld. Vol. 98, Janua Linguarum, Series Minor. The Hague: Mouton, 1972.
Özcaliskan, Seyda, and Dan I. Slobin. "Codability Effects on the Expression of Manner of Motion in Turkish and English." In Studies in Turkish Linguistics, edited by A.S. Öszoy, D. Akar, M. Nakipoglu-Demiralp, E. Erguvanli-Taylan and Ayhan A. Aksu-Koc, 259-70. Istanbul: Bogazici University Press, 2003.
Slobin, Dan I. "From ‘Thought and Language’ to ‘Thinking for Speaking’." In Rethinking Linguistic Relativity, edited by John Gumperz, J. and Stephan C. Levinson, 70-96. Cambridge: Cambridge University Press, 1996.
von Stutterheim, Christiane, and Ralf Nüse. "Processes of Conceptualization in Language Production: Language-Specific Perspectives and Event Construal." Linguistics 41, no. 5 (2003): 851-81.
Winawer, Jonathan, Nathan Witthoft, Michael C. Frank, Lisa Wu, Alex R. Wade, and Lera Boroditsky. "Russian Blues Reveal Effects of Language on Color Discrimination." PNAS 104, no. 19 (2007): 7780-85.
For many years it has been thought that the essence of a grammatical theory was an account of the 'discrete infinity' of human language. But what if languages, some or all, are finite? What if a language had both a finite set of sentences and a maximum sentence size? Are such languages possible? What would this mean? And if there are such languages, is there a plausible cultural explanation for this finitude? How much grammar does a human language need? In this talk I suggest that some languages are finite and that there are plausible cultural reasons for this. If this is correct, then some formal theories of grammar have long been concerned about explaining the wrong things.
Languages seem to accomplish design tradeoffs in different ways: for example, some populate their lexicon by large segmental inventories and short words, while others use small segmental inventories and rich syllable structures/long words, or perhaps recruit suprasegmental tone; similarly, some languages use case-marking to mark grammatical relations while others use positional order to mark grammatical relations; both of these tradeoffs strike a balance between amount of stored material and amount of combinatorial constraints. Despite this design flexibility in the division of labor between data structures and types of operations on them, within each grammatical system, we find invariant properties, such as asymmetric treatment of binary phonological oppositions (markedness), constraints on morpheme combination that reflect semantic compositionality, and effects of predicate interpretation on aspectual and tense marking. To a large extent, the argument that recurrent crosslinguistic patterns merely reflect historical pathways and sampling error can be waylaid by artificial grammar experiments, in which any effects of historical residue are completely controlled by the experimenter. As Karl Verner remarked, "linguistics cannot totally rule out accident, but accidents en masse it cannot and must not countenance", and we hope that this talk may launch fruitful discussion at the workshop about whether attempts to explain invariant grammatical properties purely in terms of culture, working memory, or admixtures thereof yield countenanceable accidents.
Language has “duality of patterning”, a property by which Hockett (1958: 577) meant that “the smallest meaning units” are “composed of arrangements of meaningless but differentiative features”. This explicit and intended meaning (see Hockett 1958: 574-578; 1977: 172), although perhaps obscured by some confusing terminology on his part, has recently been distorted by several renowned scholars (Hurford 2002: 319, Pinker 2003: 32, Anderson 2004: 30, Studdert-Kennedy 2005: 50, Pinker & Jackendoff 2005: 212, Burling 2005: 33-34, 139, 247, etc.), all of whom claim that duality of patterning (DP) means that language has phonology and syntax.
Once reinstated in its original sense, DP appears to deserve a place in the Faculty of Language in the Narrow sense (FLN) (Hauser, Chomsky & Fitch 2002 (HCF)), the portion of the faculty of language that, being uniquely human and unique to language, would essentially consist of syntactic recursion according to HCF. Moreover, although at the basis of both DP and recursion there is a combinatorial mechanism that relates mental representations of sound (or its equivalent in sign languages) and meaning, it could turn out that DP is in FLN while recursion is not. This would happen if recursion were demonstrated to be domain-general in humans and domain-specific (and unrelated to communication) in other animals, a possibility entertained at the very end of HCF. In this connection, it has to be noted that recursion, but not DP, seems to be present in music and arithmetic. Thus, whether recursion is included in FLN and music and arithmetic are derivative from language, or recursion is domain-general and music and arithmetic are, along with language, its manifestations, DP would in either case be in FLN, whereas recursion would be included in it only if the first possibility is the correct one.
Anderson, S.R. (2004). Doctor Dolittle’s Delusion. Animals and the Uniqueness of Human Language. New Haven & London: Yale University Press.
Burling, Robins (2005). The talking ape. How language evolved. Oxford: Oxford University Press.
Hauser, M.D., N. Chomsky & W. Tecumseh Fitch (2002). The faculty of language: What is it, who has it, and how did it evolve? Science 298, 1569-1579.
Hockett, C.F. (1958). A Course in Modern Linguistics. New York: MacMillan. (The references to the revised 1962 edition are made through the Spanish edition Curso de lingüística moderna. Buenos Aires, Eudeba, 1971).
Hockett, C.F. (1977), The View from Language. Selected Essays 1948-1974. Athens: The University of Georgia Press, 163-186.
Hurford, J.R. (2002b). The roles of expression and representation in language evolution. In Alison Wray (ed.), The Transition to Language. Oxford: Oxford University Press, 311-334.
Pinker, S. (2003). Language as an adaptation to the cognitive niche. In Morten H. Christiansen & Simon Kirby (eds.), Language Evolution. Oxford: Oxford University Press, 6-37.
Pinker, S. & R. Jackendoff (2005). The faculty of language: what’s special about it? Cognition 95, 201-236.
Studdert-Kennedy, M. (2005). How did language go discrete? In Maggie Tallerman (ed.), Language Origins: Perspectives on Evolution. Oxford, New York: Oxford University Press, 48-67.
“The creation of a new neural pathway in no way entails the extinction of the previous one – …. [there is] persistence of the older link.” (Bickerton 1998, 353)
Modern syntactic theory, including Chomsky’s (1995) Minimalism and its predecessors (e.g. Stowell 1981, 1983, Burzio 1981, Kitagawa 1986, Koopman & Sportiche 1991, Hale & Keyser 2002), analyzes almost every clause/sentence, crosslinguistically, as underlyingly a small clause (1a), which gets transformed into a full/finite clause/sentence (a TP or Tense Phrase), upon subsequent Merge of Tense (b) and Move of the subject into TP (c):
(1) a. [SC Peter retire] → b. [TP will [Peter retire]] → c. [TP Peter [T’ will [Peter retire]]]
Relative to TPs (c), which have (at least) two layers of clausal structure and even two subject positions (both of which can be filled, as will be shown), small clauses (a) can be seen as ‘half-clauses’: they involve only one layer of clausal structure and one subject position. While languages and analyses vary with respect to what type, or how many, functional projections project on top of the small clause, most would agree that the small clause core is a universal property. Why is this so? Why should every sentence, in every language, be built upon the foundation of a small clause? One can certainly envision a more economical, more direct derivation, one that does not first create a small clause only to later distort and transform it into a TP through a process that moreover requires Move, as pointed out in e.g. Parker (2006).
Small clauses comparable to (1a) are also found crosslinguistically in some marginal but productive root constructions, which defy the principles of modern syntax, and which are thus typically not analyzed in syntax (but see Akmajian 1984 and Progovac 2006, 2007).
(2) Peter retire?! Him retire?! Me worry?! Me first! Family first! Problem solved.
The existence of such quirky clauses, and the universal unfolding of sentence structure from the underlying small clause, begin to make sense if both are seen as by-products of evolutionary tinkering. My proposal is that small clauses with a single layer of clausal structure, comparable in form (but not always in meaning) to (2), characterized a protosyntactic stage in the evolution of language, with TP representing a later innovation (see e.g. Jackendoff 1999, 2002, Pinker & Bloom 1990, Deutscher 2005, Progovac 2007, on the gradual evolution of syntax). Examples from other languages will show that the small clause construct is already able to express, on its own, the basic clausal properties, including predication, assertion, topic-comment, some agreement, and some temporal/aspectual properties.
In this scenario, TP would not have arisen from scratch, designed in an optimal way (e.g. Chomsky 2005); rather, it would have been superimposed upon what was already there, the small clause, leading to the quirks and complexities that syntax is (in)famous for. Even today, the building of the sentence in (1) seems to be retracing/incorporating those evolutionary steps (Progovac 2007). Arguably, child language acquisition likewise proceeds from a small clause stage to a TP stage (e.g. Radford 1988, 1990, Lebeaux 1989, Ouhalla 1991, Platzack 1990), providing some corroborating evidence (see Rolfe 1996; see also the literature on agrammatism). In addition, evolutionary explanations invoking layering and recency dominance can be found elsewhere, e.g. in symbolic reference (Deacon 1997); in the superimposition of timed speech over ancient prosody (Deacon 1997; also Pulvermüller 2002, Piattelli-Palmarini & Uriagereka 2004); and in brain stratification accounts (in Vygotsky’s and Piaget’s work, as well as in the triune brain proposals, e.g. Isaacson 1982, MacLean 1949). The common theme in all is the inclusion of attainments of earlier stages in the structures of later stages.
An epiphenomenon is a phenomenon that (i) is real, but (ii) follows from another, more primitive phenomenon. We argue that this is true of recursion. The linguistic phenomenon we describe as recursion is really the result of categorization, and more specifically of the phases of a derivation. As you complete one phase, you cannot but begin the next, giving rise to an infinite progression of the form (P (P (P…(P)))…), where every P is a phase head. There are basic informational layers of the clause, which provide additional structuring, and there is an apparent universal constraint which determines the proper linear sequence of heads within a phase (Cinque 1999). Wherever these further constraints, resulting in a partial ordering of heads, come from, they have nothing to do with recursion.
Recursion, then, is an overt effect that follows from phasing, and phasing amounts to categorization of a particular type. This type we associate with the notion of referentiality. We capture referentiality as the maximalization of a given description. Any cognitive act of describing the world has to eventually result in an act of reference, when the description becomes sufficient to determine a referent. A phase is the completion of an act of reference.
Note in support that recursion in language is restricted in various ways. Not everything embeds in everything. In particular, embedding of a clause into a clause does not allow for unlimited recursion (*[[If John comes on time [if Mary wakes him up [if her alarm clock rings]]], he will get an award]). Other patterns of embedding do allow for unlimited recursion ([DP the possibility [CP that John believes [DP the claim [CP that Mary regrets [DP the fact [CP that she was late]]]]]]; [CP John has [vP denied [CP that Mary has ever [vP said [CP that he has been drinking]]]]]). However, embedding of an element in another element of the same category is here necessarily mediated by an interleaving element of a different category. This signals that the algorithm behind the generation of these structures may not really be recursive. Note that the ‘mediating’ category is usually rich enough to constitute a barrier (a phase boundary). This points in an unexpected direction: recursion, though present at the descriptive level and cognitively real, is actually banned at the generative level. If language is not recursive, and if in addition we capture phases as completions of acts of reference, we predict that reference (and truth, a species of reference) does not embed. And indeed it does not: any complex noun phrase has only one overall referent, and any complex sentence has only one truth value, evaluated for purposes of discourse integration only at the root.
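The mediated-embedding pattern just described can be made concrete in a small sketch. The following toy generator (my illustration, not part of the abstract; all names are invented) builds nested structure in which a category can only embed itself via an interleaving category of a different type, mirroring the DP/CP alternation, and a checker verifies that no category ever immediately contains itself.

```python
# Illustrative sketch (assumption, not the abstract's formalism): embedding
# of a category within itself is always routed through a different,
# 'mediating' category, as in the DP/CP alternation discussed above.

def embed(category, depth):
    """Build nested structure in which DP and CP strictly alternate."""
    other = {"DP": "CP", "CP": "DP"}[category]
    if depth == 0:
        return [category]
    # A category never immediately contains itself: the recursive step
    # always passes through the other category (the 'phase boundary').
    return [category, embed(other, depth - 1)]

def no_immediate_self_embedding(tree):
    """Check that no node directly dominates a node of the same category."""
    label, *children = tree
    for child in children:
        if isinstance(child, list):
            if child[0] == label:
                return False
            if not no_immediate_self_embedding(child):
                return False
    return True

assert embed("DP", 2) == ["DP", ["CP", ["DP"]]]
assert no_immediate_self_embedding(embed("DP", 5))
assert not no_immediate_self_embedding(["DP", ["DP"]])  # unmediated: banned
```

The point of the sketch is the one made in the text: the generating procedure itself is not self-embedding at any single step, even though the resulting structures look recursive at the descriptive level.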
Following current minimalist reasoning, the study of natural language is guided by the Strongest Minimalist Thesis, under which language is an optimal solution to the task of relating sound and meaning (Lasnik 2002; Chomsky 2005). However, if language is an optimal solution to the conditions imposed at its interfaces, why does natural language exhibit the wide range of variation that is attested?
In my paper I argue that the source of cross-linguistic variation is not a set of innate parameters forming part of the genetic endowment (factor I), but rather follows from the fact that different mental components impose different economy conditions on the Faculty of Language (FL) (factor III).
Following minimalist assumptions about grammatical architecture, it is the Conceptual-Intentional system(s), the Sensorimotor system(s) and the Lexicon, an instance of memory, that impose conditions on FL. Since these mental systems are autonomous, their requirements on FL, i.e. the conditions they impose at the respective interfaces, need not be compatible. In fact, full compatibility would be completely unexpected; a much more natural assumption is that several of the conditions that the different mental systems impose on FL are in conflict. One must then distinguish two kinds of conditions: hard conditions, which may not be violated (e.g. Full Interpretability), and weaker conditions, such as economy principles (like the Merge-over-Move constraint). Each grammar has to obey all hard conditions, but if two economy conditions conflict, the grammar may choose which of them takes precedence.
The existence of conflicting interface conditions, which follows immediately from the modularity of grammar, already predicts the existence of grammatical variation, i.e. it creates a parametric space. This means that parameters are given in advance by the grammatical architecture and therefore do not have to be postulated as innate linguistic primitives.
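The hard-versus-economy logic can be sketched schematically. The snippet below is my construal, not the paper's formalism: hard conditions filter candidates absolutely, while conflicting economy conditions are ranked, and the choice of ranking (a point of cross-linguistic variation) decides among the survivors. All condition names and candidates are invented.

```python
# Schematic sketch (hypothetical names throughout): grammars share the hard
# conditions but differ in how they rank conflicting economy conditions.

def select(candidates, hard_conditions, economy_ranking):
    """Pick the candidate that obeys all hard conditions and best satisfies
    the economy conditions under the grammar's chosen ranking."""
    survivors = [c for c in candidates
                 if all(cond(c) for cond in hard_conditions)]
    # Fewer violations on a higher-ranked economy condition wins.
    return min(survivors, key=lambda c: [cond(c) for cond in economy_ranking])

# Toy candidates: (derivation, interpretable?, move_steps, merge_steps)
candidates = [("der1", True, 2, 1), ("der2", True, 1, 2), ("der3", False, 0, 0)]

full_interpretability = lambda c: not c[1]  # hard: 0 violations required
fewest_moves = lambda c: c[2]               # economy condition A
fewest_merges = lambda c: c[3]              # economy condition B

# Grammar 1 ranks A over B; Grammar 2 the reverse: different "settings"
# fall out of the same architecture, with no innate parameter postulated.
print(select(candidates, [lambda c: c[1]], [fewest_moves, fewest_merges])[0])  # der2
print(select(candidates, [lambda c: c[1]], [fewest_merges, fewest_moves])[0])  # der1
```

The design point matches the text: the "parameter" is nothing but the resolution of a conflict between independently motivated economy conditions.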
Chomsky, N. (2005). "Three factors in language design." Linguistic Inquiry 36: 1-22.
Lasnik, H. (2002). "The Minimalist Program in syntax." Trends in Cognitive Sciences 6: 432-437.
Various “poverty of the stimulus” arguments in linguistics aim to reveal the representational bias in the learner that makes language acquisition possible. At the same time, recent work in statistical learning highlights the human ability to detect statistical regularities in speech. While these perspectives are often placed in opposition to each other, I argue that statistical learning and abstract representations each require the other. Arguments for the necessity of particular representational content are silent about how experience is mapped onto the innate representational format; arguments about the learner’s sensitivity to statistical properties of speech presuppose a representational vocabulary. This paper provides several case studies illustrating how statistical learning works in concert with an innate representational vocabulary to drive language acquisition.
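The abstract's point that statistical learning presupposes a representational vocabulary can be illustrated with the classic case of transitional-probability segmentation: computing the statistics only makes sense once the stream is already coded into units (here, syllables). The mini-corpus and code below are my own invented illustration, not the paper's data.

```python
# Illustrative sketch (hypothetical data): transitional probabilities over
# a syllable stream. The statistics operate over an assumed representational
# vocabulary (syllables) -- the point made in the text above.
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Invented "words" bada and kugo embedded in a continuous stream.
stream = ["ba", "da", "ku", "go", "ba", "da", "ba", "da", "ku", "go"]
tps = transitional_probabilities(stream)

print(tps[("ba", "da")])  # 1.0 -- within-word transition is fully predictable
# The word-boundary transition (da -> ku) is less predictable (2/3 here),
# which is what makes boundaries statistically detectable in the first place.
```

The sketch shows both halves of the argument: the regularities are genuinely in the signal, but they are only computable relative to a prior segmentation of the signal into representational units.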
Are semantic categories determined primarily by universal principles (such as perceptual and cognitive predispositions), or are they relatively free to vary (depending on cultural, environmental and historical circumstances)? Despite the long history of this question, there is still little consensus on what the answer might be. Previously established semantic universals are regularly challenged (e.g., Roberson, Davies, and Davidoff 2000 versus Kay and Regier 2003 on color; Majid, Enfield, and van Staden 2006 versus Wierzbicka 2007 on the body), and the outcome has yet to be determined.
Much of the previous debate has centered on relatively concrete domains, perhaps due to the implicit assumption that these are more likely to yield substantial universals (cf. Gentner 1982). In this paper, I will draw on findings from two large-scale cross-linguistic projects based at the Max Planck Institute for Psycholinguistics which demonstrate that there are non-trivial constraints on how relatively abstract entities – events – are semantically categorised across languages.
The projects examine how events of “cutting and breaking” (Majid and Bowerman 2007) and “reciprocals” (Evans, Gaby, Levinson, and Majid in prep) are expressed in words (verbs) and constructions. In both projects, the starting point is an etic grid of event types – a set of videoclips – which vary along a number of parameters. The clips are used to elicit speaker descriptions from a range of geographically, genetically and typologically diverse languages. The descriptions are then analyzed using multivariate statistics. These techniques extract recurrent categorisation strategies found across languages, as well as identifying unusual patterns. They also quantify how much structure is shared – if any.
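The abstracts do not specify the multivariate techniques, but the core quantity, how much categorisation structure two languages share, can be sketched simply: each language partitions the clips into verb categories, and two languages agree on a clip pair if both lump the pair under one verb or both split it. Everything below (language names, clip labels, data) is invented for illustration.

```python
# Minimal sketch (hypothetical data, not the projects' actual pipeline):
# quantifying shared categorisation structure as pairwise agreement on
# whether clips fall under the same verb.
from itertools import combinations

def same_category_pairs(categorisation):
    """Clip pairs that the language places in the same verb category."""
    return {(a, b) for a, b in combinations(sorted(categorisation), 2)
            if categorisation[a] == categorisation[b]}

def agreement(lang1, lang2):
    """Proportion of clip pairs on which two languages agree (lump vs split)."""
    pairs = list(combinations(sorted(lang1), 2))
    s1, s2 = same_category_pairs(lang1), same_category_pairs(lang2)
    return sum((p in s1) == (p in s2) for p in pairs) / len(pairs)

# Invented "cutting and breaking" responses: clip -> verb used.
english = {"snap_twig": "break", "smash_plate": "break", "slice_bread": "cut"}
other   = {"snap_twig": "v1",    "smash_plate": "v2",    "slice_bread": "v2"}

print(agreement(english, english))  # 1.0: identical partitions
```

Aggregating such pairwise agreement scores across many languages is one way recurrent categorisation strategies, and outliers, can be made visible and quantified.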
The results suggest considerable uniformity in semantic categorisation across languages. For instance, for cutting and breaking events all languages recognize a dimension having to do with how predictable the location of separation in an entity will be. Categorization of reciprocal events shows more variation, with some quite different solutions to the problem of how to encode such events, but nevertheless recurrent semantic spaces emerge.
Evans, N., Gaby, A., Levinson, S. C., & Majid, A. (in prep). Reciprocals and semantic typology. Amsterdam: John Benjamins (Typological Studies in Language).
Gentner, D. (1982). Why nouns are learned before verbs: Linguistic relativity versus natural partitioning. In S. A. Kuczaj (Ed.), Language development: Vol. 2. Language, thought and culture (pp. 301-334). Hillsdale, NJ: Lawrence Erlbaum Associates.
Kay, P., & Regier, T. (2003). Resolving the question of color naming universals. Proceedings of the National Academy of Sciences, 100, 9085-9089.
Majid, A., & Bowerman, M. (Eds.). (2007). “Cutting and breaking” events: A cross-linguistic perspective [Special issue]. Cognitive Linguistics, 18(2).
Majid, A., Enfield, N. J., & van Staden, M. (Eds.). (2006). Parts of the body: Cross-linguistic categorization [Special issue]. Language Sciences, 28, 137-359.
Roberson, D., Davies, I., & Davidoff, J. (2000). Colour categories are not universal: Replications and new evidence from a Stone-age culture. Journal of Experimental Psychology: General, 129, 369-398.
Wierzbicka, A. (2007). Bodies and their parts: An NSM approach to semantic typology. Language Sciences, 29, 14-65.
The question of innateness and learnability will be addressed in the light of linguistic (and cognitive) phenomena and their possible neuronal basis in the human brain. Linguists have argued that discrete representations – of words, morphemes and grammatical features – are essential to the nature of language. Network approaches have argued instead that discrete representations do not develop in neural architectures. We will trace this proposition to specific types of neural networks and define network properties that do produce discrete representations [1,2]. In a similar manner, rules are denied by most neural approaches. Looking at neurophysiological evidence, which actually suggests discrete phenomena best described in terms of rules, we will argue for a possible brain basis of rules [3]. Linking this to the theory of neuronal ensembles and action-perception networks, a rule equivalent will be demonstrated in networks that mimic important features of relevant brain parts [4,5]. The features of neuronal hardware necessary for such discrete and rule-like representations will be defined, aiming at a statement about the neurobiological prerequisites of human language, especially syntax and the lexicon. Grammatical serial-order mechanisms will be exemplified using centre-embedded sentences and pushdown storage, along with a neuronal architecture capable of processing strings of this type [6,7].
1 Wennekers, T. et al. (2006) Language models based on Hebbian cell assemblies. J Physiol Paris 100, 16-30
2 Garagnani, M. et al. (2007) A neuronal model of the language cortex. Neurocomputing 70, 1914-1919
3 Pulvermüller, F. and Assadollahi, R. (2007) Grammar or serial order?: Discrete combinatorial brain mechanisms reflected by the syntactic Mismatch Negativity. Journal of Cognitive Neuroscience 19 (6), 971-980
4 Pulvermüller, F. (2003) Sequence detectors as a basis of grammar in the brain. Theory in Biosciences 122, 87-103
5 Knoblauch, A. and Pulvermüller, F. (2005) Sequence detector networks and associative learning of grammatical categories. In Biomimetic neural learning for intelligent robots (Wermter, S. et al., eds.), pp. 31-53, Springer
6 Pulvermüller, F. (1993) On connecting syntax and the brain. In Brain theory - spatiotemporal aspects of brain function (Aertsen, A., ed.), pp. 131-145, Elsevier
7 Pulvermüller, F. (2003) The neuroscience of language, Cambridge University Press
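The role of pushdown storage for centre-embedding mentioned above can be sketched in a few lines: centre-embedded sentences pair each noun phrase with its verb in mirror order, which a stack handles naturally. The toy lexicon and token pattern below are my own invented illustration, not the architecture of refs [6,7].

```python
# Hedged sketch (hypothetical lexicon): recognizing the N^k V^k dependency
# pattern of centre-embedded clauses ("the dog the cat the rat bit chased
# died") with a stack, i.e. a pushdown store.

NOUNS = {"dog", "cat", "rat"}
VERBS = {"bit", "chased", "died"}

def centre_embedding_ok(tokens):
    """Accept strings of the form N^k V^k, pairing nouns and verbs in mirror order."""
    stack = []
    seen_verb = False
    for tok in tokens:
        if tok in NOUNS:
            if seen_verb:       # in this toy pattern, no nouns after verbs
                return False
            stack.append(tok)   # push each subject noun
        elif tok in VERBS:
            seen_verb = True
            if not stack:       # a verb with no matching noun
                return False
            stack.pop()         # each verb discharges the most recent noun
        else:
            return False
    return not stack            # every noun must be discharged

assert centre_embedding_ok("dog cat rat died chased bit".split())
assert not centre_embedding_ok("dog cat died chased bit".split())  # one noun short
```

The mirror-order pairing is exactly what makes centre-embedding a useful probe: whatever neuronal architecture processes such strings must implement something functionally equivalent to this last-in, first-out storage.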
Given the mismatch between the representational primitives of linguistics and neuroscience, how are these domains of inquiry supposed to be connected in an explanatory way? Jointly with my colleague David Embick, I argue that a theoretically motivated, computationally explicit, and biologically plausible model of the neural basis of language requires linking hypotheses between the cognitive and biological ‘alphabets’ that are most plausibly stated in computational terms of a certain granularity. Building on the (typically discredited) ‘neophrenological’ approach associated with much of contemporary neuroimaging, I suggest a ‘computational organology’ approach, resuscitating those parts of Gall’s work that connect in a useful way with contemporary neurobiology.
A primary goal of linguistic research is to isolate and categorize universals of attested languages. This paper will illustrate distinctions between three major classes of linguistic universals: (a) formal/computational/architectural processes; (b) behavioral mechanisms for language use and learning; (c) neurological mechanisms for instantiation of (a) and (b).
The main point is that the different kinds of universals intermingle in surprising ways. Specific examples will show how the formal requirements can shape some of the behavioral mechanisms, how the behavioral mechanisms call on specific sorts of neurological structures, and how the availability of certain kinds of neurological structures may constrain choices among formal computational alternatives.
Case studies will include the Extended Projection Principle as resulting from constraints on language usability and learnability; derivational relations between levels resulting from intrinsic motivational factors; upward movement/tree-building processes resulting from basic properties of neural structures used in species recognition.
These considerations call into question the implicit goal of modern minimalist “Biolinguistics”, in which different linguistic features are to be distinctly attributed to formal vs. “interface” structures. Rather, language exhibits simultaneous social, individual, behavioral and neurological constraints, which mutually select interactions that result in coherent, effable and learnable languages.