Kling, Nathan D.; Browne, Lorne; Robinson, Earl J.
1985 Applied Stochastic Models and Data Analysis
A major property‐casualty insurance company had streamlined its underwriting procedures and hoped to design a new commercial insurance package that would appeal to its independent agents. It hoped to do this by improving service times, premiums and/or commissions, but making these improvements would require the agents to fill out a new underwriting form. The research used conjoint analysis to determine for management the optimum levels of each of the factors to be employed in the new programme, as well as the possible negative impact of the new underwriting form. Also discussed are issues relating to the use of conjoint analysis, in particular the handling of large factorial designs and the aggregation of individual results.
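The conjoint approach described above can be sketched numerically. In a minimal version, respondents rate product profiles defined by factor levels, and part-worth utilities are recovered by least squares; the relative importance of each factor is the range of its part-worths. The factors, levels, and ratings below are hypothetical illustrations, not the study's actual design.

```python
import numpy as np

# Hypothetical conjoint task: 8 package profiles rated by one agent.
# Assumed factors for illustration: service time (fast/slow),
# commission (high/low), underwriting form (short/long).
# Effects coding: +1 / -1 for the two levels of each factor.
profiles = np.array([
    [ 1,  1,  1],
    [ 1,  1, -1],
    [ 1, -1,  1],
    [ 1, -1, -1],
    [-1,  1,  1],
    [-1,  1, -1],
    [-1, -1,  1],
    [-1, -1, -1],
])
ratings = np.array([9, 6, 7, 4, 7, 4, 5, 2], dtype=float)

# Part-worth utilities via least squares: ratings ~ intercept + factors.
X = np.column_stack([np.ones(len(ratings)), profiles])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, partworths = coef[0], coef[1:]

# Relative importance of each factor: range of its part-worths
# (2 * |coefficient| under +/-1 coding), normalised to sum to 1.
ranges = 2 * np.abs(partworths)
importance = ranges / ranges.sum()
```

With a full factorial such as this one the coding is orthogonal, so the part-worths are exact; the "large factorial designs" the abstract mentions are handled in practice with fractional factorials rated by many respondents.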
1985 Applied Stochastic Models and Data Analysis
A centring of the parameters of a log‐linear model, based on the weighted harmonic mean of the exponential transforms of the parameters, gives results more consistent with the model than the traditional centring technique, which is arbitrarily based (by analogy with the analysis of variance) on the average of the untransformed parameters. It permits the partitioning of the total information of the table into components, each corresponding to a margin appearing in the constraints. An illustration is given at the end of the article concerning the labour force in the European Community.
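The contrast between the two centrings can be sketched as follows. The reading of the abstract assumed here is that the traditional centring subtracts the unweighted arithmetic mean of the parameters, while the proposed centring subtracts the log of the weighted harmonic mean of their exponential transforms, so that after centring that harmonic mean equals one. The parameters and weights are invented for illustration.

```python
import numpy as np

# Hypothetical main-effect parameters of a log-linear model for a
# three-category margin, with cell weights (e.g. marginal proportions).
lam = np.array([0.9, -0.2, -0.4])
w = np.array([0.5, 0.3, 0.2])          # weights sum to 1

# Traditional centring (ANOVA analogy): subtract the unweighted mean
# of the untransformed parameters, so they sum to zero.
lam_trad = lam - lam.mean()

# Proposed centring: subtract log H, where H is the weighted harmonic
# mean of exp(lam), i.e. H = 1 / sum(w / exp(lam)).
H = 1.0 / np.sum(w / np.exp(lam))
lam_harm = lam - np.log(H)

# Defining property: after centring, the weighted harmonic mean of the
# exponential transforms of the parameters equals 1.
check = 1.0 / np.sum(w / np.exp(lam_harm))
```

The two centrings shift every parameter by a constant, so fitted cell counts are unchanged; what differs is the interpretation of the individual parameters relative to the model's multiplicative scale.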
1985 Applied Stochastic Models and Data Analysis
Principal components analysis and correspondence analysis may be generalized to handle time‐dependent data. For real‐valued data the basic tool is the Karhunen‐Loeve decomposition. For categorical processes harmonic analysis is presented in terms of reciprocal averaging. Details are given for approximated and interpolated solutions.
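For the real-valued case, the empirical Karhunen‐Loeve decomposition mentioned above amounts to an eigendecomposition of the sample covariance over time points; the simulated trajectories below are a hypothetical stand-in for real time-dependent data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-dependent data: n sampled trajectories of a
# real-valued process observed at T equally spaced time points.
n, T = 200, 50
t = np.linspace(0.0, 1.0, T)
# Simulated process: two random-amplitude smooth components plus noise.
X = (rng.standard_normal((n, 1)) * np.sin(np.pi * t)
     + 0.5 * rng.standard_normal((n, 1)) * np.cos(2 * np.pi * t)
     + 0.1 * rng.standard_normal((n, T)))

# Empirical Karhunen-Loeve decomposition: eigendecomposition of the
# T x T sample covariance (the time-domain analogue of PCA).
Xc = X - X.mean(axis=0)                  # centre each time point
C = Xc.T @ Xc / n                        # sample covariance matrix
eigval, eigvec = np.linalg.eigh(C)       # ascending eigenvalues
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# KL coefficients (component scores) for each trajectory, and the
# share of total variance captured by the leading two eigenfunctions.
scores = Xc @ eigvec
explained = eigval[:2].sum() / eigval.sum()
```

Since the simulated process has only two smooth components plus small noise, the first two eigenfunctions recover nearly all of the variance; the categorical analogue via reciprocal averaging replaces the covariance with a suitably scaled indicator cross-product.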
1985 Applied Stochastic Models and Data Analysis
After a short methodological presentation of similarity aggregation in automatic classification, we present an application to computational linguistics. Starting from an existing dictionary of synonyms, we explain how we have (a) defined what was, in our opinion, the meaning of the synonymy relation we wanted to reveal in a new optimized dictionary, (b) transformed the existing dictionary into a sequence of matrices of synonymy, (c) checked with an adapted algorithm (the similarity aggregation technique) whether the links appearing in the existing dictionary corresponded to our definition of synonymy, and (d) tried to improve the synonymy relation, in order to propose more accurate data facilitating the management of a new dictionary and providing a classification of synonyms according to a separate semic valuation.
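Steps (b) and (c) above can be sketched in a simplified form: turn the directed synonym lists of an existing dictionary into a 0/1 synonymy matrix, then check the links against a chosen definition of synonymy. The mini-dictionary is invented, and the check below uses simple symmetry as the definition, standing in for the full similarity aggregation criterion.

```python
# Hypothetical mini-dictionary: each headword lists the synonyms given
# for it (directed links, as in step (b)).
dictionary = {
    "big":    ["large", "great"],
    "large":  ["big"],
    "great":  ["big", "grand"],
    "grand":  ["great"],
    "small":  ["little"],
    "little": [],
}

words = sorted(dictionary)
idx = {w: i for i, w in enumerate(words)}
n = len(words)

# Step (b): directed synonymy matrix S, with S[i][j] = 1 when word j
# is listed as a synonym under headword i.
S = [[0] * n for _ in range(n)]
for w, syns in dictionary.items():
    for s in syns:
        S[idx[w]][idx[s]] = 1

# Step (c), simplified: treating synonymy as a symmetric relation,
# mutual links satisfy the definition, while one-way links are
# candidates for correction in the optimized dictionary (step (d)).
mutual  = [(words[i], words[j]) for i in range(n) for j in range(i + 1, n)
           if S[i][j] and S[j][i]]
one_way = [(words[i], words[j]) for i in range(n) for j in range(n)
           if i != j and S[i][j] and not S[j][i]]
```

In the full method, the matrices for successive senses are aggregated and a Condorcet-type criterion decides which links to keep, rather than the plain symmetry test used here.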