
Scientific Consulting


About Scientific Consulting

Balancing knowledge needs, scientific rigor, timelines, and budgetary realities - four challenges faced as much in industry as in private/academic settings.

 

In Biopharma Research

Unfortunately, biopharma Phase 4 and 5 studies are often criticized for their methodology and credibility.  True, there have been "quick-and-dirty" studies: poorly designed, underpowered, ambiguous data models, inappropriate statistical analyses, questionable results reporting, and opportunistic communications.  Just as true, the majority of Phase 4 and 5 studies have significant scientific merit - perhaps not always on par with registration studies, but certainly on par with most investigator-initiated and academic studies.

 

The issue is not whether Phase 4 and 5 studies are just as good as Phase 3 studies.  Instead, the issue is which approach is most appropriate for the strategic and clinical purpose(s).  Pre-registration and post-marketing studies are distinctly different activities along the scientific lifecycle of a compound.  Think, however, about what you would not know if you only had the constrained datasets of controlled trials - but not the "real world" depth and breadth of data on practice patterns and outcomes.  Think, for a moment, how much you would be able to say about patients if all you had was knowledge from relatively homogeneous trial samples - but no access to large databases of patients within and across full ranges of disease states.  Think, finally, about the new indications you might not have discovered - and the "back-to-the-future" opportunity you would have missed: no real-world data to launch new registration trials for supplemental registrations.

 

Biopharma studies fall into different classes - each with different levels of tolerance for design and data quality, yet all sharing a common quality threshold.  The task is to balance purpose with scientific rigor - yet always to design studies that will stand up to the scientific standards of their class.  You will not hear us advocate (and certainly not see us do) poor research.  Instead, we work with our clients on finding the right scientific strategy and putting the markers in place for quality and tolerance.

 

This is the scientific consulting expertise you can expect from MATRIX45: solid science, pragmatically adapted to strategic purpose, timelines, and budgetary realities.  We may be able to recommend one particular scientific approach, or present you with different scenarios and their respective advantages and disadvantages.

 

In Private/Academic Research

The principles of scientific rigor cross all settings of healthcare research.  In their own way, academic/private researchers confront similar challenges of balancing knowledge needs, scientific merit, timelines, and budgets.  On the other hand, private/academic research is often different from biopharma research in intent: basic research, whether at the laboratory bench or in the clinical field, to find determinants, identify processes, examine models, or test new methods.

 

Here, too, MATRIX45 brings valuable expertise.  Some of us started in academic research; some of us are still involved in it.  We know firsthand the conceptual, methodological, and statistical challenges.  We too have had to balance the relevance (and at times urgency) of major scientific questions with realistic designs - and with funding constraints (see also our section on Development & Funding).

 

At the Interface

There is an added dimension in healthcare research at the interface of basic and applied research.  Conducting clinical biopharma research requires efficient and productive collaboration between the two sectors.  Biopharma needs top academic scientists who are at the forefront of both basic and applied research.  Academic/private researchers need the collaborative (and funding) mechanisms that enable them to answer new clinical questions.

 

Again, MATRIX45 staff bring experience in both arenas.  The research careers of some of us have encompassed the basic-to-applied continuum and the interface between sectors.  We'd like to believe it sets us apart.


 

 

Across Paradigms And Methodologies

 

 

In absolute terms, the randomized controlled trial (RCT) is the gold standard for assessing the efficacy of a treatment in individuals - whether a drug treatment, a nonpharmacological intervention, or a healthcare delivery model.

 

Is the RCT the only valid model, i.e., the only model that will help us understand the antecedents, determinants, and outcomes of interventions as they exist "out there"?  In other words, do RCTs yield the necessary and sufficient knowledge to safely and effectively improve healthcare for patients, families, and communities?  What if RCTs are simply not possible - whether because too many confounds cannot be neutralized, ethical issues prevail, multilevel influences must be examined, randomization is impossible, or costs are plainly prohibitive?

 

Just to add another dimension ... The quality of any study is determined to a good extent by the quality of measurement of its endpoints or outcomes.  In the physiological arena, this may not be as much of a problem: while (surrogate) markers may not be perfect, variability in measurement is more often due to laboratory (and equipment) differences than "true" errors in measurement.

 

Things get more difficult when measuring behavioral, emotional, or cognitive attributes.  The mere plethora of measures for particular traits, attributes, or conditions (let's call them human characteristics) underscores the challenges of measurement.  Should we measure these characteristics as comprehensively and completely as possible, while also being able to zoom in on particular dimensions (the case for large batteries)?  Should we instead focus on simple but accurate tools that screen well and give us a solid overall assessment (the case for quick, easy-to-use, yet reliable tools)?  Perhaps a cascading model is appropriate, where we start out with a top-level screening assessment and, based on embedded triggers, move on to more in-depth scales and batteries?  Or is there a step (or cascading level) in between, where screening tools provide more than top-level evaluation and enable clinical researchers to identify a patient's dimensions of behavior, emotion, and cognition that may require further assessment - without imposing the up-front burden of large psychometric batteries?
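
To make the cascading idea concrete, here is a minimal illustrative sketch in Python.  The scale names, subscores, and cut-offs are hypothetical and chosen only for the example; they do not represent actual instruments or thresholds we endorse.

    from dataclasses import dataclass

    @dataclass
    class ScreenResult:
        total: int        # overall screening score
        mood: int         # embedded mood subscore
        cognition: int    # embedded cognition subscore

    def next_assessments(screen):
        """Map a top-level screen to the in-depth batteries it triggers."""
        if screen.total < 10:                  # overall trigger not reached: stop at screening
            return []
        follow_up = []
        if screen.mood >= 6:                   # mood dimension flagged
            follow_up.append("full depression battery")
        if screen.cognition >= 6:              # cognitive dimension flagged
            follow_up.append("full neuropsychological battery")
        return follow_up or ["clinician review"]

    print(next_assessments(ScreenResult(total=12, mood=7, cognition=3)))
    # -> ['full depression battery']

The point of the sketch is the intermediate step: the embedded subscores decide which (if any) in-depth battery is administered, rather than imposing the full battery on every patient up front.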

 

How do we at MATRIX45 think about these issues?

Without questioning the gold standard status of the RCT, we are convinced that a comprehensive and well-rounded lifecycle program of research may require the combination and integration of various paradigms and methodologies.  Indeed, multiple questions inevitably raise the probability of multiple methodologies - and choices will need to be made.

Further, by necessity RCTs constrain the variability at the "left side" of the equation.  The left side is clear cut, unambiguous, and under control: "each patient gets something versus nothing or something else."  RCTs also constrain the "right side" of the equation: the efficacy endpoints to be achieved within a limited time period, and potential safety issues during that same time period.

 

Consequently, many questions go unanswered even as the efficacy and safety of an intervention are being established.  Are the controlled conditions of the RCT replicable in day-to-day clinical practice?   Which patients benefit a lot, which less, and which not at all?  Where and how may variability in clinical practice occur?  What outcomes are associated with this variability?

 

Even more questions ... Can we detect new applications for this intervention: from new indications for a given drug to new patient populations for a given healthcare delivery model?  If clinical guidelines are available, how do actual practice patterns meet the treatment standards and targeted outcomes advocated by these guidelines?

 

Comprehensive and well-rounded research programs on the lifecycle of a clinical intervention (a drug, an integrated healthcare solution, a health-systems delivery model, ...) will require the use of multiple paradigms and methodologies.  At MATRIX45, we are prepared to assist clients in the various challenges involved in this process: designing lifecycle research programs, with special emphasis on post-RCT studies; bringing together the various stakeholders in such programs - scientific, strategic, and clinical; and implementing and disseminating the knowledge generated by such lifecycle programs.  We are experienced in trials, epidemiological studies, measurement validation, and patient registries.  We have used both controlled and noncontrolled methods to examine causality.  We have looked back on data, and we have used them to project into the future.  At times, we have even abandoned "hard" quantitative methods to find answers that only "soft" qualitative methods might yield.

 

A note on the measurement of behavioral, emotional, and psychosocial attributes...  The field of psychodiagnostics is too broad, too diverse, and too deep for anyone to master (according to our academic psychodiagnostic experts).  This has forced us to make some choices at MATRIX45.  One, we would like to move beyond the use of relatively simple rating scales in clinical research, even if some have attained "gold standard" status (a particular, convenient, not-too-demanding clinician rating scale for depression comes to mind).  Two, we want to fill the gap between top-level screens and "deep-down" batteries, in part by assessing the extent to which the "screens" may yield some intermediate-level insights into a patient's psychological capabilities.  Three, regardless of scope and depth, good research depends on good measurement of attributes that are difficult to measure - not only in the psychiatric and neurological domains, but in health behavior in general.

 

In sum, at MATRIX45 you will find the "toolboxes" for successful comprehensive research programs - in objectives, design, measurement, and analysis.  Importantly, you will also find the frameworks and rationales for knowing when, how, and for what purpose to use the tools in our toolboxes.


 

 

Types of Studies And Investigations

 

 

In our careers, we have designed or conducted the following types of studies and investigations:

Controlled trials

Pharmaco-epidemiologic studies

Observational studies

Practice pattern analysis

Outcomes analysis

Practice <=> outcomes analysis

Conversion trials

Natural-entry trials

Registries

New indications analysis

Pattern recognition and signal analysis

Instrument development and psychometric analysis

Causal modeling

Multilevel modeling

Evidence-based practice and outcomes improvement

Surveillance studies


 

 

Our Network of Experts

 

 

Our academic affiliations in North America and Europe, coupled with a network of clinical opinion leaders and methodological specialists, have enabled us to build a "virtual" expert staffing model that covers the full range of scientific methodologies and statistical models.  At the same time, we have the integrity to tell prospective clients when we do not have (and cannot find) the required expertise.


 

 

Case Studies

 

 

Drawing upon our principals' career-long scientific consulting work, we offer the following (public domain) examples:

 

Data Integrity in Phase 4 & 5 Studies

Blockbuster Under Competitive Threat

Core Data Model for Phases 2 through 5

Assessing Multilevel Determinants of Treatment Outcomes

Differentiating Dimensions of Depression and Dementia in Older Adults

 


 

 

Data Integrity in Phase 4 & 5 Studies

 

"Should our [nonregistration] study have the same rigor of source document verification, monitoring and auditing as a registration trial?" - a question often asked in biopharma.  The answer is neither yes nor no.  Rigor is important, but needs to be balanced against such factors as purpose, intended use of data, allowable margin of error, to name a few.  Especially in large international pharmaco-epidemiologic and outcomes studies, the rigor of trials may not be necessary nor affordable.  Thus, we work with clients on identifying the required boundaries of methodology and data integrity, while acknowledging the inherent drawbacks of various adjustments.  We also work with clients on developing cost-efficient methods of assuring data integrity, from random (remote) audits to advanced statistical pattern detection.  Defensibility of the findings, internally or externally, is the guiding principle.

 


 

 

Blockbuster Under Competitive Threat

A blockbuster drug, an early biotechnology success story and in a few years' time the standard of care, comes under competitive threat as similar agents gain approval and reimbursement.  To assert its scientific leadership, the manufacturer decides to sponsor the independent development of best practice guidelines and to launch a longitudinal survey.  It wants to establish a database, while also grappling with the strategic and scientific questions that drive design, implementation, analysis, and dissemination.

 

Fast forward through a hectic 2-month pre-launch period and a 6-month data collection effort, and here is a 6-month longitudinal database on over 14,000 patients - and the survey expanding from one world region to a global initiative.  The data are analyzed, presented to key opinion leaders, and released for dissemination - along with the best practice guidelines - less than a year after the survey was designed.  The findings reveal a significant gap between best and "real world" practice, and document the outcomes of various practice patterns.  Innovative medical education and clinician support strategies are developed in response, while the core survey concept is extended into other approved or emerging indications.  All along, sales in the initial study region accelerate rapidly: clinicians are treating more patients, but are not necessarily treating them better.  Concurrent with surveys being launched across indications in major markets, global revenues for this drug increase by >275% over 4 years - from blockbuster to megabuster.

 


 

 

A Core Data Model For Phases 2-5

 

There is often a gap (if not a chasm) between the data models of pre-registration studies and those of post-marketing studies.  Having completed a promising phase 2 study on a potential next-to-market product with a better side-effect profile, the company sought to develop a continuous data model for phases 3 through 5 - one able to accommodate both trial and non-trial studies.  Working with client staff and key opinion leaders, we defined the core data elements - primary endpoints, secondary endpoints, and outcome variables - to be collected throughout the compound's lifecycle.  We also specified flexible extended data elements that can be added to study protocols and data projects on an "as-needed" basis.  The result is data continuity: in definition, measurement, and interpretation.  In turn, this enables integration of datasets within and across phases into meta-datasets.  Importantly, it assures consistency and continuity in interpreting results.
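
As a rough sketch of what a "core plus extensions" record could look like - our illustration only, not the client's actual schema or field names:

    from dataclasses import dataclass, field
    from typing import Any, Dict

    @dataclass
    class CoreRecord:
        patient_id: str
        visit_date: str                              # ISO date, defined once for all phases
        primary_endpoint: float                      # measured the same way in every study
        secondary_endpoints: Dict[str, float] = field(default_factory=dict)
        extensions: Dict[str, Any] = field(default_factory=dict)   # protocol-specific add-ons

    # A phase 4 observational study adds practice-pattern fields as extensions only,
    # so the core elements stay directly comparable with the earlier trial data.
    rec = CoreRecord("P-0001", "2006-03-01", primary_endpoint=4.2,
                     secondary_endpoints={"qol_total": 67.0},
                     extensions={"prescribing_setting": "community"})

The design choice is simply that core elements are defined once and never redefined per protocol, while everything study-specific lives in the extension layer - which is what makes meta-datasets across phases possible.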

 


 

 

Assessing Multilevel Determinants of Treatment Outcomes

Imagine a drug trial in which the efficacy of a new drug is established: a limited but statistically significant treatment response is observed in the treatment group versus the placebo group.  However, there is some uneasiness about the findings: the "between-groups" results are significant, but there are placebo and treatment patients responding at similar levels of slight improvement (the "overlapping distributions" problem).
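
A small simulation makes the pattern concrete.  The numbers below are invented purely for illustration - they are not trial data, only a sketch of how a significant between-groups difference can coexist with heavily overlapping individual responses.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(45)
    placebo   = rng.normal(loc=2.0, scale=5.0, size=400)   # mean improvement of 2 points
    treatment = rng.normal(loc=3.5, scale=5.0, size=400)   # mean improvement of 3.5 points

    t, p = ttest_ind(treatment, placebo)
    overlap = np.mean(treatment < np.median(placebo))       # treated patients below the placebo median
    print(f"p = {p:.4f}; yet {overlap:.0%} of treated patients fall below the placebo median")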

 

What might have caused the overlapping response distribution curves?  Of all possible patient assignments into subsamples, randomization might have generated subsamples with inherent (and often undetectable) biases.  Even before randomization, perhaps the inclusion and exclusion criteria were not specified as tightly as they could have been.  But then, would we want subsamples that are so contrived in their homogeneity that they fail to represent real-world patients (and cause excessively restrictive drug labeling)?  Endpoints may not have been operationalized effectively or measured accurately.  Data exploration may not have shed light on some distributional characteristics - and so on.

 

Let's examine the variability/overlap issue from another but complementary angle.  Numerous centers participated in the trial, recruited from many countries and several continents.  Might geography be a proxy for different cultures, approaches to healthcare, organization of healthcare delivery, and case mix?  Most patients were recruited from academic medical centers or affiliated hospitals, not community hospitals.  Perhaps several clinicians were involved in screening patients; was experience a factor?  Did all centers and clinicians comply 100% with the study protocol, or might there have been (known or undetected) deviations?  And at the end of the "patient supply chain", who was responsible for recruiting patients, obtaining their informed consent, orienting them to the study, guiding them through the protocol, collecting their data over the duration of the study period, getting patients to answer sensitive questions (including about their compliance!), and transmitting patient data to the trial coordination center?

 

Clinical outcomes, in trials and in other studies, may be influenced by many factors.  Patient randomization should equalize confounds at the individual patient level.  However, beyond the patient, there are many other factors, at different levels of organization, that may influence outcomes.  At MATRIX45, we believe that in the future both controlled and noncontrolled studies will increasingly use multilevel analysis models to better understand the contextual determinants of clinical outcomes.
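
For readers who want to see what such a model looks like in code, here is a minimal sketch of a random-intercept (mixed-effects) analysis with patients nested within centers.  The dataset name and variable names are hypothetical stand-ins, not an analysis we report here.

    import pandas as pd
    import statsmodels.formula.api as smf

    # 'trial.csv' stands in for an analysis dataset with columns: outcome, treatment, center
    df = pd.read_csv("trial.csv")

    # Fixed effect of treatment; a random intercept per center absorbs between-center
    # variability (case mix, clinician experience, local practice, protocol deviations)
    model = smf.mixedlm("outcome ~ treatment", data=df, groups=df["center"])
    result = model.fit()
    print(result.summary())

The estimated variance of the center-level intercepts is itself informative: it quantifies how much of the outcome variability sits above the individual patient level.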

 


 

 

Differentiating Dimensions of Depression and Dementia in Older Adults

Depression and dementia are common clinical problems in outpatient and inpatient populations.  In a systematic program of research, we sought ways to use common screening instruments to provide more than an overall assessment of mood or cognition.  We examined whether such instruments had embedded dimensions of clinical status that could give a more differentiated screening and guide next-step evaluation.  As screening instruments are often used in (definitive) clinical research, we were also interested in strengthening these instruments conceptually so that they could provide more than a yes/no screening answer and instead bring more differentiated assessment dimensions to the fore.

 

As the references here show, widely used scales for depression and dementia enable both clinicians and researchers to go beyond the "yes/no" screening question.  Embedded dimensions enable that intermediate step in the cascading process from screening to in-depth assessment by focusing on key dimensions.
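
As a generic illustration of how embedded dimensions can be recovered from item-level data, the sketch below applies a simple exploratory factor analysis.  The item file and the assumption of two dimensions are hypothetical; the actual instruments and analyses are described in the references.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # 'screen_items.csv' stands in for item-level data: rows are patients, columns are scale items
    items = np.loadtxt("screen_items.csv", delimiter=",")

    fa = FactorAnalysis(n_components=2, random_state=0)    # assume two embedded dimensions
    fa.fit(items)

    # Item loadings indicate which items cluster into each embedded dimension
    # (e.g., a mood-related and a cognition-related subscale)
    print(np.round(fa.components_.T, 2))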

 


 

 

Scientific Consulting Services

 

Study / project design using various paradigms and methodologies

Protocol development

Data model development, from single study to lifecycle models

Corporate database development

Measurement, instrumentation, scaling and (psycho)diagnostics

Statistical analysis planning

Advisory board / expert panel / key opinion leader development and support

 


 

 

 


 
