
Student Seminar Schedule

(Click here to access the faculty seminar schedule.)

Seminars are held on Tuesdays, 4:00-5:00 p.m., in Griffin-Floyd 100.

Refreshments will be provided!

Fall 2008

Date | Speaker | Title (click for abstract) | Comments
Sep 16 | Kiranmoy Das | A Statistical Model for the Identification of Genes Governing the Incidence of Cancer with Age |
Sep 23 | Demetris Athienitis | Introduction to the Analysis of Time Series with Conditional Heteroscedasticity, with an Example of Google Log Asset Returns |
Sep 30 | Ruitao Liu | On Some New Contributions Towards Objective Priors |
Oct 7 | Claudio Fuentes | Testing for the Existence of Clusters with Applications to NIR Spectroscopy Data | Griffin-Floyd 230
Oct 14 | Dhiman Bhadra | A Brief Review of Penalized Splines and Their Applications |
Oct 21 | Nabanita Mukherjee | Summer Internship Experiences at Novartis Oncology |
Oct 30 (Thursday) | Dr. Salvador Gezan | From Multiple Comparisons to False Discovery Rate |
Nov 3 (Monday) | Dr. Nan Laird | Missing Data Problems in Public Health |
Nov 18 | Aixin Tan | An Introduction to the Control and Estimation of Multiple Testing Error Rates |
Nov 25 | Dr. Arthur Berg | Resources for Statisticians |
Dec 2 | Dr. Trevor Park | Aspects of Principal Component Rotation |

Abstracts


A Statistical Model for the Identification of Genes Governing the Incidence of Cancer with Age

Kiranmoy Das (Sep 16)

Cancer incidence increases with age. This epidemiological pattern can be attributed to molecular and cellular processes within individual subjects. Because cancer is partly an inherited disease, genes are thought to play a central role in shaping the incidence of cancer with age. In this work, we derive a dynamic statistical model for explaining the epidemiological pattern of cancer incidence based on individual genes that regulate cancer formation and progression. We incorporate the mathematical equations of age-specific cancer incidence into a framework for functional mapping aimed at identifying quantitative trait loci (QTLs) for dynamic changes of a complex trait. The mathematical parameters that specify differences in the curve of cancer incidence among QTL genotypes are estimated by maximum likelihood. We provide a series of testable quantitative hypotheses about the initiation and duration of genetic expression for QTLs involved in cancer progression. Computer simulation was used to examine the statistical behavior of the model. The proposed model provides a useful tool for detecting genes involved in cancer progression and a general framework for explaining the epidemiological pattern of cancer incidence.
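To make the functional-mapping idea concrete, here is a toy sketch, not the model from the talk: it assumes a hypothetical logistic incidence curve and observed genotypes (the actual framework treats QTL genotypes as latent given marker data) and compares genotype-specific curves to a common curve via a likelihood-ratio test.

```python
# Toy sketch of the functional-mapping idea (hypothetical logistic incidence
# curve and observed genotypes; the talk's actual model may differ).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

def incidence(age, a, b):
    """Logistic curve for the probability of cancer incidence by a given age."""
    return 1.0 / (1.0 + np.exp(-(a + b * age)))

# Simulate: 3 QTL genotypes with different incidence curves.
n = 600
age = rng.uniform(30, 80, n)
genotype = rng.integers(0, 3, n)                  # QTL genotype 0, 1, or 2
true_a, true_b = [-9.0, -8.0, -7.0], [0.10, 0.10, 0.10]
p = incidence(age, np.take(true_a, genotype), np.take(true_b, genotype))
cancer = rng.binomial(1, p)

def negloglik(theta, mask):
    a, b = theta
    q = np.clip(incidence(age[mask], a, b), 1e-12, 1 - 1e-12)
    y = cancer[mask]
    return -np.sum(y * np.log(q) + (1 - y) * np.log(1 - q))

# Alternative: separate curve parameters for each genotype (a QTL effect).
ll_alt = 0.0
for g in range(3):
    fit = minimize(negloglik, x0=[-8.0, 0.1], args=(genotype == g,))
    ll_alt -= fit.fun

# Null: one common curve for all genotypes (no QTL effect).
fit0 = minimize(negloglik, x0=[-8.0, 0.1], args=(np.full(n, True),))
ll_null = -fit0.fun

# Likelihood-ratio test for genotype-specific incidence curves.
lr = 2 * (ll_alt - ll_null)
print("LR statistic:", lr, "p-value:", chi2.sf(lr, df=4))
```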
schedule


Introduction to the Analysis of Time Series with Conditional Heteroscedasticity, with an Example of Google Log Asset Returns

Demetris Athienitis (Sep 23)

GARCH models were developed because simple ARMA models have a considerable drawback for analyzing and forecasting asset-return volatility: they assume constant volatility. ARMA models capture the conditional expectation of a time series given its past values and are used in conjunction with GARCH models, which model the volatility itself.
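As a minimal illustration of the volatility dynamics involved, the following sketch simulates a GARCH(1,1) process with illustrative parameters; fitting such a model to actual Google log returns would require data and an estimation routine (for example, the `arch` package).

```python
# A minimal sketch of the GARCH(1,1) volatility recursion (simulation only;
# the parameters below are illustrative, not estimated from Google returns).
import numpy as np

rng = np.random.default_rng(1)

# GARCH(1,1): sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
omega, alpha, beta = 1e-6, 0.08, 0.90    # illustrative parameters
T = 1000
r = np.empty(T)                          # log returns
sigma2 = np.empty(T)                     # conditional variances
sigma2[0] = omega / (1 - alpha - beta)   # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Volatility clustering: large |r| tends to follow large |r|, and the
# simulated returns are heavier-tailed than a normal (kurtosis > 3).
print("sample kurtosis:", ((r - r.mean()) ** 4).mean() / r.var() ** 2)
```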
schedule


On Some New Contributions Towards Objective Priors

Ruitao Liu (Sep 30)

Bayesian methods are gaining increasing popularity in the theory and practice of statistics. One key component in any Bayesian inference is the selection of priors. Generally speaking, there are two classes of priors: subjective priors and objective priors. In this talk, I will give a selective review of objective priors and talk about some new contributions towards objective priors.
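As one standard example of an objective prior (a textbook fact, not a result from the talk), the Jeffreys prior is derived from the Fisher information:

```latex
\[
\pi(\theta) \;\propto\; \sqrt{I(\theta)},
\qquad
I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\log f(X\mid\theta)\right].
\]
% Example: for X ~ Bernoulli(theta), I(theta) = 1/(theta(1-theta)), so the
% Jeffreys prior is pi(theta) \propto theta^{-1/2}(1-theta)^{-1/2},
% i.e. a Beta(1/2, 1/2) distribution.
```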
schedule


Testing for the Existence of Clusters with Applications to NIR Spectroscopy Data

Claudio Fuentes (Oct 7)

The detection and determination of clusters have long been of special interest to researchers in many fields. Many approaches to cluster analysis have been proposed, but most of them determine clusters based on the distance between observations. Although these methods have proven to work well, they are usually quite sensitive to the metric that defines the distance, and they lack statistical procedures that facilitate decision making.

In this talk we explain a procedure that permits testing for clusters using Bayesian tools. Specifically, we study the hypothesis test H0: k = 1 vs. H1: k > 1, where k denotes the number of clusters in a certain population. Finally, we present simulation studies that validate our conclusions, and we apply our method to NIR spectroscopy data from a genetic study in maize.
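The Bayesian test itself is beyond a few lines, but a crude proxy for the same question can be sketched: compare a one-component model against mixtures with more components by BIC, which approximates Bayesian model comparison in large samples. This is a stand-in, not the procedure from the talk.

```python
# A crude illustration of testing k = 1 vs. k > 1 (not the Bayesian procedure
# from the talk): compare Gaussian mixtures by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Two well-separated clusters in 2-D.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
    rng.normal(loc=5.0, scale=1.0, size=(100, 2)),
])

# Lower BIC indicates the better-supported number of components.
for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"k = {k}: BIC = {gm.bic(X):.1f}")
```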
schedule


A Brief Review of Penalized Splines and Their Applications

Dhiman Bhadra (Oct 14)

Non-parametric regression techniques are being used with much success in diverse areas of statistics. This is because these methods provide a powerful and flexible alternative to the usual parametric approaches in situations where the response-predictor relationship is too complicated to express in a known functional form. The gap between the parametric and non-parametric ideologies is bridged by what are known as splines. In this talk, I will introduce different kinds of splines and then concentrate on penalized splines (P-splines). I will explain how they are used to smooth out a series of error-prone measurements to reveal the true underlying pattern, describe some of the important properties of P-splines, and finish with their applications to different areas of statistics.
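A minimal sketch of a P-spline fit in the spirit of Eilers and Marx, assuming a cubic B-spline basis and a second-order difference penalty (knot placement, penalty order, and the choice of smoothing parameter all vary in practice):

```python
# P-spline sketch: B-spline basis plus a difference penalty on the
# coefficients (illustration only).
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)

# Error-prone measurements of a smooth underlying function.
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Cubic B-spline basis on equally spaced knots (clamped at the ends).
degree, n_inner = 3, 20
knots = np.concatenate([
    np.repeat(0.0, degree), np.linspace(0, 1, n_inner), np.repeat(1.0, degree)
])
n_basis = len(knots) - degree - 1
B = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)
])

# Second-order difference penalty; lam controls the amount of smoothing.
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1.0
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
y_smooth = B @ coef   # the smoothed curve revealing the underlying pattern
```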
schedule


Summer Internship Experiences at Novartis Oncology

Nabanita Mukherjee (Oct 21)

I did my summer internship at Novartis, a pharmaceutical company, where I worked on Simon's two-stage design for a Phase II clinical study. To work on this project, I needed to learn various clinical-trial concepts, starting with how clinical trials are conducted in general.

In this talk, I will explain some of the basic terminology related to clinical trials and give an outline of my project on Simon's two-stage design. Finally, I will share some other aspects of my experience there and offer some information about the process of applying for an internship.
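For readers unfamiliar with the design, its operating characteristics follow from simple binomial calculations (the formulas are standard from Simon, 1989; the design parameters below are illustrative, not those from the internship project):

```python
# Operating characteristics of Simon's two-stage design (illustrative design).
from scipy.stats import binom

# Stage 1: enroll n1 patients; stop for futility if <= r1 responses.
# Stage 2: enroll up to n total; reject the drug if <= r responses overall.
r1, n1, r, n = 3, 13, 12, 43             # an example design
p0, p1 = 0.20, 0.40                      # null / alternative response rates

def prob_reject(p):
    """Probability of declaring the drug ineffective when the true rate is p."""
    early = binom.cdf(r1, n1, p)         # stopped at stage 1
    late = sum(
        binom.pmf(x, n1, p) * binom.cdf(r - x, n - n1, p)
        for x in range(r1 + 1, min(n1, r) + 1)
    )
    return early + late

print("type I error  (accept drug | p0):", 1 - prob_reject(p0))
print("type II error (reject drug | p1):", prob_reject(p1))
print("P(early termination | p0):", binom.cdf(r1, n1, p0))
```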
schedule


From Multiple Comparisons to False Discovery Rate

Dr. Salvador Gezan (Oct 30)

Within the scientific community there is ongoing debate about situations that require multiple testing, such as multiple comparisons (e.g., Tukey's procedure) and multiple tests on the same data set (e.g., microarray analyses). The use and reporting of FDR and other corrections for experiment-wise error is increasing, but practitioners frequently lack a clear understanding of the ideas behind these tools. In this presentation we will review and discuss some of these issues.
schedule


Missing Data Problems in Public Health

Dr. Nan Laird (Nov 3)

Missing data arise commonly in many studies in Public Health and Medicine. Here I will review some cases where statistical methodology has contributed to our ability to overcome limitations in technology and sampling and produce good inferences with imperfect data. We will discuss examples from estimation of radon levels from diffusion battery data, longitudinal cohort studies, and testing for genetic effects with family data.
schedule


An Introduction to the Control and Estimation of Multiple Testing Error Rates

Aixin Tan (Nov 18)

This talk gives a brief review of multiple-hypothesis testing problems. Multiple-hypothesis testing involves controlling much more complicated error rates than single-hypothesis testing. The familywise error rate (FWER) is a widely used measure of error in multiple testing, while the false discovery rate (FDR) is a newer and less strict measure. Multiple testing procedures guarding against FDR are therefore expected to have more power than procedures that guard against FWER.
FDR-controlling procedures were pioneered by Benjamini and Hochberg (1995, Journal of the Royal Statistical Society, Series B 57, 289-300). Their approach fixes an FDR control level and then estimates the rejection region. Alternatively, direct estimation of FDR was proposed by Storey (2002, Journal of the Royal Statistical Society, Series B 64, 479-498) as a substantially more powerful approach: one fixes a rejection region of interest and then (conservatively) estimates the associated FDR. So far, most theoretical results on the estimation of FDR have been developed under Bayesian settings.
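Both approaches are short enough to sketch; the following is a minimal illustration on simulated p-values, with all tuning constants (alpha, t, lambda) chosen arbitrarily:

```python
# Sketches of the two approaches described above: the Benjamini-Hochberg
# step-up procedure (fix the FDR level, find the rejection region) and
# Storey's estimator (fix the rejection region, estimate its FDR).
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejections controlling FDR at level alpha."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    below = sorted_p <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()   # largest k with p_(k) small enough
        reject[order[: k + 1]] = True
    return reject

def storey_fdr(pvals, t, lam=0.5):
    """Conservatively estimate the FDR of the rejection region {p <= t}."""
    m = len(pvals)
    pi0_hat = (pvals > lam).sum() / (m * (1 - lam))   # estimated null fraction
    n_reject = max((pvals <= t).sum(), 1)
    return min(pi0_hat * m * t / n_reject, 1.0)

# Example: 900 null p-values (uniform) plus 100 signals concentrated near zero.
rng = np.random.default_rng(4)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.5, 20, size=100)])
print("BH rejections at alpha = 0.05:", benjamini_hochberg(p).sum())
print("Storey FDR estimate for t = 0.01:", storey_fdr(p, t=0.01))
```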
schedule


Resources for Statisticians

Dr. Arthur Berg (Nov 25)

In this talk I will describe some resources that I have found helpful both as a student and as a faculty member. These include: Beamer, Sweave, RWinEdt, Google Scholar, ArXiv, RGraphViz, UF Proxy, Clusters, HTML, GYM, Mediawiki, LimeSurvey, and last, but not least, Statistics (for statisticians). See you there!
schedule


Aspects of Principal Component Rotation

Dr. Trevor Park (Dec 2)

Principal component methodology is widely used for many purposes, including exploratory data analysis. However, principal component estimates are subject to high sampling variability, especially for high-dimensional data with low sample sizes. This variability can be reduced by using methods for simplifying the components, including methods based on dimension reduction, thresholding, and penalization. Some recent methodology will be described, with a special emphasis on likelihood-based methods. A few current research issues will be discussed.
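As a toy illustration of one simplification strategy mentioned above, thresholding, the following sketch zeroes out small loadings of the leading sample principal component; the likelihood-based methods of the talk are more sophisticated.

```python
# Toy illustration of simplifying a principal component by thresholding its
# loadings (not the likelihood-based methodology described in the talk).
import numpy as np

rng = np.random.default_rng(5)

# High-dimensional data, low sample size: 25 observations, 100 variables;
# only the first 10 variables carry the signal.
n, d = 25, 100
signal = rng.normal(size=(n, 1)) * np.r_[np.ones(10), np.zeros(d - 10)]
X = signal + rng.normal(scale=0.5, size=(n, d))

# Leading principal component via the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]

# Threshold small loadings to zero, then renormalize: a simpler, more
# interpretable component that is less affected by sampling noise.
simple = np.where(np.abs(pc1) > 0.1, pc1, 0.0)
simple /= np.linalg.norm(simple)
print("nonzero loadings before/after:", (pc1 != 0).sum(), (simple != 0).sum())
```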
schedule