What happened to Complexity? A review of definitions, measurement and challenges
Liviana Galiano
2025-01-01
Abstract
The widely recognised multi-compositionality of linguistic complexity has led scholars to formulate several definitions of the construct, each reflecting the perspective taken on the matter in a given field. The measurement of English complexity, in particular, seems to have long relied on a few indices, mainly lexical and morphosyntactic, which in some cases appear to have been preferred over other measures for their ease of computation rather than their methodological effectiveness. The picture of complexity emerging from the present review is that of a construct that often still lacks a thorough and shared definition and operationalisation. Notwithstanding these issues, it is argued that investigating linguistic complexity remains important, and that its study becomes more tractable when 1) it is carefully distinguished from cognitive complexity (i.e., difficulty), so that the linguistic forms characterising a text and their functions are studied separately from their implications for cognitive processing; 2) evidence-based inductive approaches are adopted, which reduce the impact of the a priori assumptions about how language works inherited from traditional categories of linguistic analysis; and 3) a register-functional approach is taken, which adds an explanatory dimension to the study of linguistic complexity by taking into account how the communicative purposes, the type of audience and the production circumstances of a text influence its complexity.


