Click here to access the program structure and timetable.

October 24, 2013


Magne Jørgensen, Simula Research Laboratory, Norway

Click here for presentation.

Title: How often do we report results when there are none?

The statistical power of a study is a measure of how likely it is to
find a statistically significant effect, for a given effect size, if
one exists. Based on a calculation of the typical statistical power
and effect sizes in empirical software engineering studies, I
calculate the expected proportion of reported statistically
significant results. I compare this proportion with the actual
proportion of significant findings reported in software engineering
journals. An excess of reported statistically significant findings in
these journals indicates that there are substantial problems with the
validity of the empirical research within our domain. I analyze and
discuss possible reasons for the validity problems and suggest changes
in research practices and paper-reviewing guidelines that would lead
to a domain where one can have much more confidence in the correctness
of the reported results.
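The core of the argument can be illustrated with a short calculation. The sketch below uses a normal approximation for the power of a two-sided, two-sample test; the effect size, sample size, and true-effect fraction are illustrative placeholders, not the figures from the talk:

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized effect size d (Cohen's d), using a normal
    approximation to the test statistic's distribution."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided critical value
    lam = d * math.sqrt(n_per_group / 2.0)      # noncentrality parameter
    # Probability the statistic lands in either rejection region
    return z.cdf(lam - z_crit) + z.cdf(-lam - z_crit)

def expected_significant(power, true_fraction, alpha=0.05):
    """Expected share of significant results when only a fraction
    `true_fraction` of the tested effects are real: real effects are
    detected with probability `power`, null effects with probability
    `alpha` (false positives)."""
    return true_fraction * power + (1 - true_fraction) * alpha

# Example: a medium effect (d = 0.5) with 25 subjects per group
# yields power of roughly 0.4, so even if half the tested effects
# were real, well under half of all reported tests should come out
# significant -- any reported proportion far above this suggests
# selective reporting or other validity problems.
p = power_two_sample(0.5, 25)
print(f"power = {p:.2f}, expected significant = "
      f"{expected_significant(p, 0.5):.2f}")
```

When the observed proportion of significant findings in published papers substantially exceeds the expected proportion computed this way, the gap itself becomes evidence of bias.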

Session 1 (11:15-12:45) – Size Measurement I

11-Hassan Soubra. Fast Functional Size Measurement with Synchronous Languages

30-Gokcen Yilmaz, Seckin Tunalilar and Onur Demirors. Towards the Development of a Defect Detection Tool for COSMIC Functional Size Measurement

6-Laila Cheikhi and Alain Abran. PROMISE and ISBSG Software Engineering Data Repositories: A survey

Session 2A (14:00-15:30) – Size Measurement II

35-Frank Vogelezang, Charles Symons, Arlan Lesterhuis, Maya Daneva and Roberto Meli. Approximate COSMIC Functional Size

57-Andreas Schmietendorf, Anja Fiegler, Cornelius Wille, Reiner R. Dumke and Robert Neumann. COSMIC Functional Size Measurement of Cloud Systems

31-Feras Abutalib, Alain Abran and Dennis Giannacopoulos. Designing a Measurement Method for the Portability Non-Functional Requirement

55-Sylvie Trudel and Alain Abran. Measuring Software Reuse at the Requirements Level: A Case Study Using the COSMIC Method to Optimize Reusability

41-Luigi Buglione and Alain Abran. Improving the User Story Agile Technique Using the INVEST Criteria

23-Ahmet Ata Akca and Ayça Tarhan. Run-time measurement of COSMIC functional size for Java business applications: Is it worth the cost?

Session 2B (14:00-15:30) – Infrastructure & Process I

20-Matthias Vianden, Horst Lichter and Andreas Steffens. Towards a Maintainable Federalist Enterprise Measurement Infrastructure

26-Hajer Ayed, Naji Habra and Benoît Vanderose. AM-QuICk : a measurement-based framework for agile methods customisation

24-Kai Petersen and Cigdem Gencel. Worldviews, Research Methods, and their Relationship to Validity in Empirical Software Engineering Research

Session 3A (16:00-16:45) – Estimation

53-Rudolf Ramler and Michael Felderer. Experiences from an Initial Study on Risk Probability Estimation based on Expert Opinion

56-Sousuke Amasaki and Tomoyuki Yokogawa. The Effects of Variable Selection Methods on Linear Regression-based Effort Estimation Models

13-Pierre Erasmus and Maya Daneva. ERP Effort Estimation Based on Expert Judgments

Session 3B (16:00-17:00) – Infrastructure & Process II

2-Monica Villavicencio and Alain Abran. A Framework for Education in Software Measurement

51-Mehmet Söylemez and Ayça Tarhan. Using Process Enactment Data Analysis to Support Orthogonal Defect Classification for Software Process Improvement

29-José Antonio Pow-Sang, Daniela Villanueva, Luis Flores and Cristian Rusu. A Conversion Model and a Tool to Identify Function Point Logic Files using UML Analysis Class Diagrams

October 25, 2013

Session 4 (09:00-10:45) – Quality Evaluation I

10-Hennie Huijgens and Rini van Solingen. Measuring Best-in-Class Software Releases

28-Fatih Nayebi, Jean-Marc Desharnais and Alain Abran. An Expert-based Framework for Evaluating iOS Application Usability

25-Jia Tan, Cigdem Gencel and Kari Ronkko. A Framework for Developing Software Usability & User Experience Measurement Instruments in Mobile Industry

52-Reem Alnanih, Olga Ormandjieva and Thiruvengadam Radhakrishnan. A New Quality-in-Use Model for Mobile User Interfaces

Session 5 (11:15-12:45) – Quality Evaluation II

27-Rudolf Ramler and Johannes Himmelbauer. Noise in Bug Report Data and the Impact on Defect Prediction Results

12-Tosin Daniel Oyetoyan, Reidar Conradi and Daniela S. Cruzes. A Comparison of Different Defect Measures to Identify Defect-Prone Components

16-Miroslaw Staron. Measuring and Visualizing Code Stability – A Case Study at Three Companies

Session 6A (14:00-15:30) – Quality Evaluation III

3-Jan Vlietland and Hans Van Vliet. Visibility and Performance of IT Incident handling

42-Rakesh Rana, Miroslaw Staron, Christian Berger, Jorgen Hansson, Martin Nilsson and Fredrik Törner. Comparing between Maximum Likelihood Estimator and Non-Linear Regression estimation procedures for Software Reliability Growth Modelling

39-Shinya Ikemoto, Tadashi Dohi and Hiroyuki Okamura. Estimating Software Reliability with Static Project Data in Incremental Development Processes

46-Duygu Albayrak and Kürşat Çağıltay. Analyzing Turkish E-Government Websites by Eye Tracking

58-Kadriye Ozbas-Caglayan and Ali Hikmet Dogru. Software Repository Analysis for Investigating Design-Code Compliance

Session 6B (14:00-15:30) – SMA (Special Session)

37-Piotr Carewicz and Jarek Swierczek. Using COSMIC method with system analysis artifacts based on object-oriented approach and UML notation

49-Eric van der Vliet, Jacob Brunekreef, Paul Siemons and Rene Stavorius. Introduction of the Basis of Measurements

50-Ton Dekkers. Software Estimation – The next level

Session 7A (16:00-17:00) – MEFPI (Special Session)

54-Burak Keser, Baris Ozkan and Taylan İyidoğan. ASSIST: An Integrated Measurement Tool

43-Esra Şahin, İlgi Keskin and Ülkü Şencan. A Pilot Study: Opportunities for Improving Software Quality via Application of CMMI Measurement and Analysis

32-Kaan Kurtel. Measuring Software Maintenance: An Industrial Experience

47-Güven Özen, N. Alpay Karagöz, Oumout Chouseinoglou and Semih Bilgen. Assessing Organizational Learning in IT Organizations: An experience report from industry

Session 7B (16:00-17:30) – SMA (Special Session)

36-Cigdem Gencel and Charles Symons. How to improve project effort prediction from Functional Size measurement data

38-Michal Gadomski and Jarek Swierczek. System analysis convention with UML notation as basis for COSMIC automation in CASE Tool

