Title: | Radiocarbon Dating, Age-Depth Modelling, Relative Sea Level Rate Estimation, and Non-Parametric Phase Modelling |
---|---|
Description: | Enables quick calibration of radiocarbon dates under various calibration curves (including user generated ones); age-depth modelling as per the algorithm of Haslett and Parnell (2008) <DOI:10.1111/j.1467-9876.2008.00623.x>; Relative sea level rate estimation incorporating time uncertainty in polynomial regression models (Parnell and Gehrels 2015) <DOI:10.1002/9781118452547.ch32>; non-parametric phase modelling via Gaussian mixtures as a means to determine the activity of a site (and as an alternative to the Oxcal function SUM; currently unpublished), and reverse calibration of dates from calibrated into un-calibrated years (also unpublished). |
Authors: | Andrew Parnell [cre, aut], Nathan McJames [ctb], Bruna Wundervald [ctb], Keefe Murphy [ctb], Mateus Maia [ctb], Amin Shoari Nejad [ctb], Yong Chen Goh [ctb] |
Maintainer: | Andrew Parnell <[email protected]> |
License: | GPL (>= 2) |
Version: | 4.7.6.9000 |
Built: | 2024-11-11 04:16:47 UTC |
Source: | https://github.com/andrewcparnell/bchron |
This package enables quick calibration of radiocarbon dates under various calibration curves (including user-generated ones); age-depth modelling as per the algorithm of Haslett and Parnell (2008); relative sea level rate estimation incorporating time uncertainty in polynomial regression models; and non-parametric phase modelling via Gaussian mixtures as a means to determine the activity of a site (and as an alternative to the Oxcal function SUM).
The most important functions are BchronCalibrate
to calibrate radiocarbon (and non-radiocarbon) dates, Bchronology
for the age-depth model of Haslett and Parnell (2008), BchronRSL
to get rate estimates for relative sea level data, BchronDensity
and BchronDensityFast
for non-parametric phase modelling of age data. See the help files for these functions for examples, and see the vignette for more complete documentation.
A fast function for calibrating large numbers of radiocarbon dates involving multiple calibration curves
BchronCalibrate( ages, ageSds, calCurves = rep("intcal20", length(ages)), ids = NULL, positions = NULL, pathToCalCurves = system.file("data", package = "Bchron"), allowOutside = FALSE, eps = 1e-05, dfs = rep(100, length(ages)) )
ages |
A vector of ages provided in years before 1950. |
ageSds |
A vector of 1-sigma values for the ages given above |
calCurves |
A vector of values specifying the calibration curve to use for each age, e.g. 'intcal20', 'shcal20', 'marine20', or 'normal' for non-14C ages treated as normally distributed |
ids |
ID names for each age |
positions |
Position values (e.g. depths) for each age. In the case of layers of non-zero thickness, this should be the middle value of the slice |
pathToCalCurves |
File path to where the calibration curves are located. Defaults to the system directory where the 3 standard calibration curves are stored. |
allowOutside |
Whether to allow calibrations to run outside the range of the calibration curve. By default this is turned off as calibrations outside of the range of the calibration curve can cause severe issues with probability ranges of calibrated dates |
eps |
Cut-off point for density calculation. A value of eps>0 removes ages from the output which have negligible probability density |
dfs |
Degrees-of-freedom values for the t-distribution associated with the calibration calculation. A large value indicates Gaussian distributions assumed for the 14C ages |
This function provides a direct numerical integration strategy for computing calibrated radiocarbon ages. The steps for each 14C age are approximately as follows: 1) Create a grid of ages covering the range of the calibration curve 2) Calculate the probability of each age according to the 14C age, the standard deviation supplied and the calibration curve 3) Normalise the probabilities so that they sum to 1 4) Remove any probabilities that are less than the value given for eps Multiple calibration curves can be specified so that each 14C age can have a different curve. For ages that are not 14C, use the 'normal' calibration curve which treats the ages as normally distributed with given standard deviation
A list of lists where each element corresponds to a single age. Each element contains:
ages |
The original age supplied |
ageSds |
The original age standard deviation supplied |
positions |
The position of the age (usually the depth) |
calCurves |
The calibration curve used for that age |
ageGrid |
A grid of age values over which the density was created |
densities |
A vector of probability values giving the probability density for each element in ageGrid |
ageLab |
The label given to the age variable |
positionLab |
The label given to the position variable |
Bchronology
, BchronRSL
, BchronDensity
, BchronDensityFast
, createCalCurve
# Calibrate a single age
ages1 <- BchronCalibrate(
  ages = 11553,
  ageSds = 230,
  calCurves = "intcal20",
  ids = "Date-1"
)
summary(ages1)
plot(ages1)

# Or plot with Calibration curve
plot(ages1, includeCal = TRUE)

# Calibrate multiple ages with different calibration curves
ages2 <- BchronCalibrate(
  ages = c(3445, 11553, 7456),
  ageSds = c(50, 230, 110),
  calCurves = c("intcal20", "intcal20", "shcal20")
)
summary(ages2)
plot(ages2)

# Calibrate multiple ages with multiple calibration curves and including depth
ages3 <- BchronCalibrate(
  ages = c(3445, 11553),
  ageSds = c(50, 230),
  positions = c(100, 150),
  calCurves = c("intcal20", "normal")
)
summary(ages3)
plot(ages3, withPositions = TRUE)
Function to be used for checking the data formats in BchronCalibrate
and Bchronology
. Mostly used internally to prevent Bchron from running into problems with bad data specifications, but may also be useful for checking data sets before running the models.
BchronCheck( ages, ageSds, positions = NULL, pathToCalCurves = NULL, calCurves = NULL, positionThicknesses = NULL, ids = NULL, outlierProbs = NULL, predictPositions = NULL, artificialThickness = NULL, allowOutside = NULL, iterations = NULL, thetaStart = NULL, burn = NULL, thin = NULL, extractDate = NULL, maxExtrap = NULL, thetaMhSd = NULL, muMhSd = NULL, psiMhSd = NULL, ageScaleVal = NULL, positionEps = NULL, positionNormalise = NULL, eps = NULL, dfs = NULL, type = c("BchronCalibrate", "Bchronology") )
ages |
A vector of ages provided in years before 1950. |
ageSds |
A vector of 1-sigma values for the ages given above |
positions |
Position values (e.g. depths) for each age. In the case of layers of non-zero thickness, this should be the middle value of the slice |
pathToCalCurves |
File path to where the calibration curves are located. Defaults to the system directory where the 3 standard calibration curves are stored. |
calCurves |
A vector of values specifying the calibration curve to use for each age, e.g. 'intcal20', 'shcal20', 'marine20', or 'normal' for non-14C ages treated as normally distributed |
positionThicknesses |
Thickness values for each of the positions. The thickness value should be the full thickness value of the slice. By default set to zero. |
ids |
ID names for each age |
outlierProbs |
A vector of prior outlier probabilities, one for each age. Defaults to 0.01 |
predictPositions |
A vector of positions (e.g. depths) at which predicted age values are required. Defaults to a sequence of length 100 from the top position to the bottom position |
artificialThickness |
Amount to add to the thickness values in the case of equal positions with no (or zero) thicknesses specified |
allowOutside |
Whether to allow calibrations to run outside the range of the calibration curve. By default this is turned off as calibrations outside of the range of the calibration curve can cause severe issues with probability ranges of calibrated dates |
iterations |
The number of iterations to run the procedure for |
thetaStart |
A set of starting values for the calendar ages estimated by Bchron. If NULL uses a function to estimate the ages. These should be in the same units as the posterior ages required. See example below for usage. |
burn |
The number of starting iterations to discard |
thin |
The step size for every iteration to keep beyond the burn-in |
extractDate |
The top age of the core. Used for extrapolation purposes so that no extrapolated ages go beyond the top age of the core. Defaults to the current year |
maxExtrap |
The maximum number of extrapolations to perform before giving up and setting the predicted ages to NA. Useful for when large amounts of extrapolation are required, i.e. when some of the predictPositions are a long way from the dated positions |
thetaMhSd |
The Metropolis-Hastings standard deviation for the age parameters |
muMhSd |
The Metropolis-Hastings standard deviation for the Compound Poisson-Gamma mean |
psiMhSd |
The Metropolis-Hastings standard deviation for the Compound Poisson-Gamma scale |
ageScaleVal |
A scale value for the ages. Defaults to 1000 |
positionEps |
A small value used to check whether simulated positions are far enough apart to avoid numerical underflow errors. If errors occur in model runs this value may need to be adjusted |
positionNormalise |
Whether to normalise the position values. Defaults to TRUE |
eps |
Cut-off point for density calculation. A value of eps>0 removes ages from the output which have negligible probability density |
dfs |
Degrees-of-freedom values for the t-distribution associated with the calibration calculation. A large value indicates Gaussian distributions assumed for the 14C ages |
type |
Whether this function has been called to check parameters for calibration purposes (BchronCalibrate) or for age-depth modelling purposes (Bchronology) |
This function returns nothing other than a message.
data(Glendalough)

# Check the Glendalough data are in the right format
with(
  Glendalough,
  BchronCheck(ages,
    ageSds,
    position,
    pathToCalCurves = system.file("data", package = "Bchron"),
    calCurves,
    type = "BchronCalibrate"
  )
)
This function runs a non-parametric phase model on 14C and non-14C ages via Gaussian Mixture density estimation
BchronDensity( ages, ageSds, calCurves, pathToCalCurves = system.file("data", package = "Bchron"), dfs = rep(100, length(ages)), numMix = 50, iterations = 10000, burn = 2000, thin = 8, updateAges = FALSE, store_density = TRUE )
ages |
A vector of ages (most likely 14C) |
ageSds |
A vector of 1-sigma values for the ages given above |
calCurves |
A vector of values specifying the calibration curve to use for each age, e.g. 'intcal20', 'shcal20', 'marine20', or 'normal' for non-14C ages treated as normally distributed |
pathToCalCurves |
File path to where the calibration curves are located. Defaults to the system directory where the 3 standard calibration curves are stored |
dfs |
Degrees-of-freedom values for the t-distribution associated with the calibration calculation. A large value indicates Gaussian distributions assumed for the 14C ages |
numMix |
The number of mixture components in the phase model. Might need to be increased if the data set is large and the phase behaviour is very complex |
iterations |
The number of iterations to run for |
burn |
The number of starting iterations to discard |
thin |
The step size of iterations to keep |
updateAges |
Whether or not to update ages as part of the MCMC run. Default is FALSE. Changing this to TRUE will improve performance but will fit a slightly invalid model |
store_density |
Whether or not to store the density and age grid. Useful for plotting the output in other packages |
This model places a Gaussian mixture prior distribution on the calibrated ages and so estimates the density of the overall set of radiocarbon ages. It is designed to be a probabilistic version of the Oxcal SUM command which takes calibrated ages and sums the probability distributions with the aim of estimating activity through age as a proxy.
An object of class BchronDensityRun
with the following elements:
theta: The posterior samples of the restricted ages
p: Posterior samples of the mixture proportions
mu: Values of the means of each Gaussian mixture
calAges: The calibrated ages from BchronCalibrate
G: The number of mixture components. Equal to numMix
age_grid: A grid of ages used for the final density estimate
density: The density estimate based on the above age grid
Bchronology
, BchronRSL
, BchronDensityFast
for a faster approximate version of this function
# Read in some data from Sluggan Moss
data(Sluggan)

# Run the model
SlugDens <- with(
  Sluggan,
  BchronDensity(
    ages = ages,
    ageSds = ageSds,
    calCurves = calCurves
  )
)

# plot it
plot(SlugDens)
This function runs a non-parametric phase model on 14C and non-14C ages via Gaussian Mixture density estimation through the mclust package
BchronDensityFast( ages, ageSds, calCurves, pathToCalCurves = system.file("data", package = "Bchron"), dfs = rep(100, length(ages)), samples = 2000, G = 30 )
ages |
A vector of ages (most likely 14C) |
ageSds |
A vector of 1-sigma values for the ages given above |
calCurves |
A vector of values specifying the calibration curve to use for each age, e.g. 'intcal20', 'shcal20', 'marine20', or 'normal' for non-14C ages treated as normally distributed |
pathToCalCurves |
File path to where the calibration curves are located. Defaults to the system directory where the 3 standard calibration curves are stored. |
dfs |
Degrees-of-freedom values for the t-distribution associated with the calibration calculation. A large value indicates Gaussian distributions assumed for the 14C ages |
samples |
Number of samples of calibrated dates required |
G |
Number of Gaussian mixture components |
This is a faster approximate version of BchronDensity
that uses the densityMclust
function to compute the Gaussian mixtures for a set of calibrated ages. The method is an approximation as it does not fit a fully Bayesian model as BchronDensity
does. It is designed to be a probabilistic version of the Oxcal SUM command which takes calibrated ages and sums the probability distributions with the aim of estimating activity through age as a proxy.
An object of class BchronDensityRunFast
with the following components:
out |
The output from the run of densityMclust |
calAges |
The calibrated ages from the BchronCalibrate function |
Bchronology
, BchronCalibrate
, BchronRSL
, BchronDensity
for a slower exact version of this function
# Read in some data from Sluggan Moss
data(Sluggan)

# Run the model
SlugDensFast <- with(
  Sluggan,
  BchronDensityFast(
    ages = ages,
    ageSds = ageSds,
    calCurves = calCurves
  )
)

# plot it
plot(SlugDensFast)
Fits a non-parametric chronology model to age/position data according to the Compound Poisson-Gamma model defined by Haslett and Parnell (2008) <DOI:10.1111/j.1467-9876.2008.00623.x>. This version uses a slightly modified Markov chain Monte Carlo fitting algorithm which aims to converge more quickly and requires fewer iterations. It also uses a slightly modified procedure for identifying outliers
Bchronology( ages, ageSds, positions, positionThicknesses = rep(0, length(ages)), calCurves = rep("intcal20", length(ages)), ids = NULL, outlierProbs = rep(0.01, length(ages)), predictPositions = seq(min(positions), max(positions), length = 100), pathToCalCurves = system.file("data", package = "Bchron"), artificialThickness = 0.01, allowOutside = FALSE, iterations = 10000, burn = 2000, thin = 8, extractDate = 1950 - as.numeric(format(Sys.time(), "%Y")), maxExtrap = 1000, thetaStart = NULL, thetaMhSd = 0.5, muMhSd = 0.1, psiMhSd = 0.1, ageScaleVal = 1000, positionEps = 1e-05, positionNormalise = TRUE )
ages |
A vector of ages provided in years before 1950. |
ageSds |
A vector of 1-sigma values for the ages given above |
positions |
Position values (e.g. depths) for each age. In the case of layers of non-zero thickness, this should be the middle value of the slice |
positionThicknesses |
Thickness values for each of the positions. The thickness value should be the full thickness value of the slice. By default set to zero. |
calCurves |
A vector of values specifying the calibration curve to use for each age, e.g. 'intcal20', 'shcal20', 'marine20', or 'normal' for non-14C ages treated as normally distributed |
ids |
ID names for each age |
outlierProbs |
A vector of prior outlier probabilities, one for each age. Defaults to 0.01 |
predictPositions |
A vector of positions (e.g. depths) at which predicted age values are required. Defaults to a sequence of length 100 from the top position to the bottom position |
pathToCalCurves |
File path to where the calibration curves are located. Defaults to the system directory where the 3 standard calibration curves are stored. |
artificialThickness |
Amount to add to the thickness values in the case of equal positions with no (or zero) thicknesses specified |
allowOutside |
Whether to allow calibrations to run outside the range of the calibration curve. By default this is turned off as calibrations outside of the range of the calibration curve can cause severe issues with probability ranges of calibrated dates |
iterations |
The number of iterations to run the procedure for |
burn |
The number of starting iterations to discard |
thin |
The step size for every iteration to keep beyond the burn-in |
extractDate |
The top age of the core. Used for extrapolation purposes so that no extrapolated ages go beyond the top age of the core. Defaults to the current year |
maxExtrap |
The maximum number of extrapolations to perform before giving up and setting the predicted ages to NA. Useful for when large amounts of extrapolation are required, i.e. when some of the predictPositions are a long way from the dated positions |
thetaStart |
A set of starting values for the calendar ages estimated by Bchron. If NULL uses a function to estimate the ages. These should be in the same units as the posterior ages required. See example below for usage. |
thetaMhSd |
The Metropolis-Hastings standard deviation for the age parameters |
muMhSd |
The Metropolis-Hastings standard deviation for the Compound Poisson-Gamma mean |
psiMhSd |
The Metropolis-Hastings standard deviation for the Compound Poisson-Gamma scale |
ageScaleVal |
A scale value for the ages. Defaults to 1000 |
positionEps |
A small value used to check whether simulated positions are far enough apart to avoid numerical underflow errors. If errors occur in model runs this value may need to be adjusted |
positionNormalise |
Whether to normalise the position values. Defaults to TRUE |
The Bchronology
function fits a compound Poisson-Gamma distribution to the increments between the dated levels. This involves a stochastic linear interpolation step where the age gaps are Gamma distributed, and the position gaps are Exponential. Radiocarbon and non-radiocarbon dates (including outliers) are updated within the function also by MCMC.
A list of class BchronologyRun
which include elements:
theta |
The posterior estimated values of the ages |
phi |
The posterior estimated outlier values (1=outlier, 2=not outlier). The means of this parameter give the posterior estimated outlier probabilities |
mu |
The posterior values of the Compound Poisson-Gamma mean |
psi |
The posterior values of the Compound Poisson-Gamma scale |
thetaPredict |
The posterior estimated ages for each of the values in predictPosition |
predictPositions |
The positions at which estimated ages were required |
calAges |
The calibrated ages as output from BchronCalibrate |
inputVals |
All of the input values to the Bchronology run |
Haslett, J., and Parnell, A. C. (2008). A simple monotone process with application to radiocarbon-dated depth chronologies. Journal of the Royal Statistical Society, Series C, 57, 399-418. DOI:10.1111/j.1467-9876.2008.00623.x Parnell, A. C., Haslett, J., Allen, J. R. M., Buck, C. E., and Huntley, B. (2008). A flexible approach to assessing synchroneity of past events using Bayesian reconstructions of sedimentation history. Quaternary Science Reviews, 27(19-20), 1872-1885. DOI:10.1016/j.quascirev.2008.07.009
BchronCalibrate
, BchronRSL
, BchronDensity
, BchronDensityFast
# Data from Glendalough
data(Glendalough)

# Run in Bchronology - all but the first age use intcal20
GlenOut <- with(
  Glendalough,
  Bchronology(
    ages = ages,
    ageSds = ageSds,
    calCurves = calCurves,
    positions = position,
    positionThicknesses = thickness,
    ids = id,
    predictPositions = seq(0, 1500, by = 10)
  )
)

# Summarise it a few different ways
summary(GlenOut) # Default is for quantiles of ages at predictPosition values
summary(GlenOut, type = "convergence") # Check model convergence
summary(GlenOut, type = "outliers") # Look at outlier probabilities

# Predict for some new positions
predictAges <- predict(GlenOut,
  newPositions = c(150, 725, 1500),
  newPositionThicknesses = c(5, 0, 20)
)

# Plot the output
plot(GlenOut) +
  ggplot2::labs(
    title = "Glendalough",
    xlab = "Age (cal years BP)",
    ylab = "Depth (cm)"
  )

# If you need to specify your own starting values
startingAges <- c(0, 2000, 10000, 11000, 13000, 13500)
GlenOut <- with(
  Glendalough,
  Bchronology(
    ages = ages,
    ageSds = ageSds,
    calCurves = calCurves,
    positions = position,
    positionThicknesses = thickness,
    ids = id,
    predictPositions = seq(0, 1500, by = 10),
    thetaStart = startingAges
  )
)
Relative sea level rate (RSL) estimation
BchronRSL( BchronologyRun, RSLmean, RSLsd, degree = 1, iterations = 10000, burn = 2000, thin = 8 )
BchronologyRun |
Output from a run of Bchronology |
RSLmean |
A vector of RSL mean estimates of the same length as the number of predictPositions given to the Bchronology function |
RSLsd |
A vector of RSL standard deviations of the same length as the number of predictPositions given to the Bchronology function |
degree |
The degree of the polynomial regression: linear=1 (default), quadratic=2, etc. Supports up to degree 5, though this will depend on the data given |
iterations |
The number of MCMC iterations to run |
burn |
The number of starting iterations to discard |
thin |
The step size of iterations to keep beyond the burn-in |
This function fits an errors-in-variables regression model to relative sea level (RSL) data. An errors-in-variables regression model allows for uncertainty in the explanatory variable, here the age of sea level data point. The algorithm is more fully defined in the reference below
An object of class BchronRSLRun containing the posterior samples from the fitted model
Andrew C. Parnell and W. Roland Gehrels (2013) 'Using chronological models in late holocene sea level reconstructions from salt marsh sediments' In: I. Shennan, B.P. Horton, and A.J. Long (eds). Handbook of Sea Level Research. Chichester: Wiley
BchronCalibrate
, Bchronology
, BchronDensity
, BchronDensityFast
# Load in data
data(TestChronData)
data(TestRSLData)

# Run through Bchronology
RSLrun <- with(
  TestChronData,
  Bchronology(
    ages = ages,
    ageSds = ageSds,
    positions = position,
    positionThicknesses = thickness,
    ids = id,
    calCurves = calCurves,
    predictPositions = TestRSLData$Depth
  )
)

# Now run through BchronRSL
RSLrun2 <- BchronRSL(RSLrun,
  RSLmean = TestRSLData$RSL,
  RSLsd = TestRSLData$Sigma,
  degree = 3
)

# Summarise it
summary(RSLrun2)

# Plot it
plot(RSLrun2)
This function finds, for a given current chronology, created via
Bchronology
, which positions (depths) to date next.
If N = 1 it just finds the position with the biggest uncertainty.
If N > 1 it puts a date at the N = 1 position and re-runs
Bchronology
with the extra pseudo-date. It uses the
unCalibrate
function with the un-calibrated age estimated
at the median of the chronology and the sd as specified via the
newSds
argument. Other arguments specify the new thicknesses,
calibration curves, and outlier probabilities for newly inserted pseudo-dates.
choosePositions( bchrRun, N = 1, newSds = 30, newThicknesses = 0, positions = bchrRun$predictPositions, newCalCurve = "intcal20", newOutlierProb = 0.05, level = 0.5, plot = TRUE, count = 1, linesAt = NULL )
bchrRun |
A run of the current chronology as output from Bchronology |
N |
The number of new positions required |
newSds |
The new standard deviations of the pseudo-added dates |
newThicknesses |
The new thicknesses of the pseudo-added dates |
positions |
The positions allowed when estimating the new positions to date. Defaults to the predictPositions of the supplied bchrRun |
newCalCurve |
The new calibration curve of the pseudo-added dates |
newOutlierProb |
The new outlier probabilities of the pseudo-added dates |
level |
The confidence level required for minimising the uncertainty. Defaults to 50%. (Note: this will be estimated more robustly than the 95% level) |
plot |
Whether to plot the chronologies as they are produced |
count |
Counter function (not for use other than by the function itself) |
linesAt |
Horizontal line positions (not for use other than by the function itself) |
Some plots and the positions to date next
Bchronology
for the main function to create chronologies, unCalibrate
for the ability to invert calendar dates for a given calibration curve.
data(Glendalough)

GlenOut <- Bchronology(
  ages = Glendalough$ages,
  ageSds = Glendalough$ageSds,
  calCurves = Glendalough$calCurves,
  positions = Glendalough$position,
  positionThicknesses = Glendalough$thickness,
  ids = Glendalough$id,
  predictPositions = seq(0, 1500, by = 10)
)

# Find out which positions (depths) to date next if there is room for more dates
# Here choosing 3 new positions to date
newPositions <- choosePositions(GlenOut, N = 3)
print(newPositions)

# Suppose you are only interested in dating new depths between 500 and 700 cm
newPositions2 <- choosePositions(GlenOut,
  N = 2,
  positions = seq(500, 700, by = 10)
)
print(newPositions2)
This function takes as input two Bchronology
runs and compares the uncertainty intervals. It does this by
computing the mean uncertainty across the core (type = 'mean'
) at a specified percentile level (e.g. 95%) and
subsequently reporting the reduction/increase in uncertainty between the two runs. Both cores must
have the same set of depths/positions at regular intervals.
coreInfluence( bchrRun1, bchrRun2, percentile = 0.95, type = c("plot", "summary", "max"), ageTolerance = 500, ... )
bchrRun1 |
The output of a run of the Bchronology function |
bchrRun2 |
The output of another run of the Bchronology function |
percentile |
The value of the percentile to compare the uncertainties. Default is 95% |
type |
The type(s) of output required: one or more of 'plot', 'summary', or 'max' |
ageTolerance |
A value in years for which to report the positions at which the reduction in uncertainty exceeds this value. |
... |
Additional arguments to plot |
For example, if the ageTolerance
value is 500 years, then coreInfluence
will return all of the positions at
which the uncertainty reduction is bigger than 500.
Depending on type, outputs some text and plots providing the influence values for the cores in question.
Bchronology
, choosePositions
, dateInfluence
for finding the influence of removing a single date from a core
data(Glendalough)

# Start with a run that removes two dates
GlenOut1 <- Bchronology(
  ages = Glendalough$ages[-c(3:4)],
  ageSds = Glendalough$ageSds[-c(3:4)],
  calCurves = Glendalough$calCurves[-c(3:4)],
  positions = Glendalough$position[-c(3:4)],
  positionThicknesses = Glendalough$thickness[-c(3:4)],
  ids = Glendalough$id[-c(3:4)],
  predictPositions = seq(0, 1500, by = 10)
)

GlenOut2 <- Bchronology(
  ages = Glendalough$ages,
  ageSds = Glendalough$ageSds,
  calCurves = Glendalough$calCurves,
  positions = Glendalough$position,
  positionThicknesses = Glendalough$thickness,
  ids = Glendalough$id,
  predictPositions = seq(0, 1500, by = 10)
)

# Now compare their influence
coreInfluence(GlenOut1,
  GlenOut2,
  type = c("max", "plot"),
  xlab = "Age (cal years BP)",
  ylab = "Depth (cm)",
  main = "Chronology difference at 95% for Glendalough removing two dates",
  las = 1
)
A function for creating a new calibration curve not already available in Bchron
createCalCurve( name, calAges, uncalAges, oneSigma = rep(0, length(calAges)), pathToCalCurves = getwd(), createFile = TRUE )
name |
The name of the new calibration curve |
calAges |
A vector of the calendar/calibrated ages in years before present |
uncalAges |
A vector of values of uncalibrated ages in appropriate units (e.g. 14C years BP) |
oneSigma |
The one sigma (one standard deviation) values in uncalibrated units. If left blank it assumes these are all zero |
pathToCalCurves |
The path to the calibration curves. Will write by default to the working directory |
createFile |
whether to write out the new file or not. Only turned off for testing purposes |
All calibration curves are stored by Bchron in the standard R gzipped text format. You can find the location of the calibration curves by typing system.file('data',package='Bchron')
. Any created calibration curve will be converted to this format. However R packages are not allowed to write to this directory so it is up to the user to put the resulting calibration curve file in the appropriate directory. It can then be used as in the examples below. However note that re-installing Bchron will likely over-write previously created calibration curves so you should make sure to store the code used to create it. As a short-cut to copying it by hand you can instead use the file.copy
command in the example below.
## Not run:
# Load in the calibration curve with:
intcal09 <- read.table(system.file("extdata/intcal09.14c", package = "Bchron"),
  sep = ","
)

# Run createCalCurve
createCalCurve(
  name = "intcal09",
  calAges = intcal09[, 1],
  uncalAges = intcal09[, 2],
  oneSigma = intcal09[, 3]
)

# Copy the file to the right place
file.copy(
  from = "intcal09.rda",
  to = system.file("data", package = "Bchron"),
  overwrite = TRUE
) # Only need this if you've run it more than once

# Calibrate the ages under two calibration curves
age_09 <- BchronCalibrate(
  ages = 15500,
  ageSds = 150,
  calCurves = "intcal09",
  ids = "My Date",
  pathToCalCurves = getwd()
)
age_20 <- BchronCalibrate(ages = 15500, ageSds = 150, calCurves = "intcal20")

# Finally plot the difference
library(ggplot2)
plot(age_09) +
  geom_line(
    data = as.data.frame(age_20$Date1),
    aes(x = ageGrid, y = densities),
    col = "red"
  ) +
  ggtitle("Intcal09 vs Intcal20")
## End(Not run)
This function takes as input a Bchronology
run and allows the user to estimate a value of 'influence' for either a particular date (by name or number), for all dates in a core (whichDate = 'all'
), or for all internal dates (whichDate = 'internal'
). It measures the influence by either the Kullback-Leibler divergence (KL
), the absolute mean difference (absMeanDiff
), or the absolute median difference (absMedianDiff
).
dateInfluence( bchrRun, whichDate = "all", measure = c("KL", "absMeanDiff", "absMedianDiff") )
bchrRun |
The output of a run of the Bchronology function |
whichDate |
The chosen date to remove. Either 'all', 'internal', or the name/number of a single date |
measure |
Either 'KL' for the Kullback-Leibler divergence, 'absMeanDiff' for the absolute mean difference, or 'absMedianDiff' for the absolute median difference |
The KL
measure is preferred as it takes account of the full probability distributions but it lacks a simple interpretation. The best way to use it is with whichDate = 'all'
: the largest value corresponds to the most influential date in the chronology. For simpler interpretation use measure = 'absMeanDiff'
or measure = 'absMedianDiff'
as for these the influence is measured in years.
When the predictPositions from the original Bchronology
run do not include those of the date(s) being left out then the function uses the closest position and reports the change.
Outputs some text providing the influence values for the date(s) in question. If assigned to an object, it also returns a list containing all the probability distributions.
Bchronology
, summary.BchronologyRun
, coreInfluence
, choosePositions
data(Glendalough)

GlenOut <- Bchronology(
  ages = Glendalough$ages,
  ageSds = Glendalough$ageSds,
  calCurves = Glendalough$calCurves,
  positions = Glendalough$position,
  positionThicknesses = Glendalough$thickness,
  ids = Glendalough$id,
  predictPositions = seq(0, 1500, by = 10)
)

dateInfluence(GlenOut, whichDate = 4, measure = "absMeanDiff")
Chronology data for Glendalough data set
data(Glendalough)
A data frame with 6 observations on the following 6 variables:
id
ID of each age
ages
Age in (14C) years BP
ageSds
Age standard deviations
position
Depths in cm
thickness
Thicknesses in cm
calCurves
Calibration curve for each age
This Glendalough data can be used with Bchronology
or BchronDensity
Haslett, J., Whiley, M., Bhattacharya, S., Mitchell, F. J. G., Allen, J. R. M., Huntley, B., & Salter-Townshend, M. (2006). Bayesian palaeoclimate reconstruction. Journal of the Royal Statistical Society, Series A, 169, 395-438.
A function for computing highest density regions (HDRs)
hdr(date, prob = 0.95)
date |
A calibrated Bchron date, via e.g. BchronCalibrate |
prob |
The desired probability level, in the range (0, 1) |
The output of this function is a list of contiguous ranges which cover the probability interval requested. A highest density region might have multiple such ranges if the calibrated date is multi-modal. These differ from credible intervals, which are always contiguous but will not be a good representation of a multi-modal probability distribution.
A list where each element is one of the contiguous sets making up the HDR
# Calibrate a single age
ages <- BchronCalibrate(
  ages = 11553,
  ageSds = 230,
  calCurves = "intcal20"
)

# Get the highest density region
hdr(ages$Date1)
Northern hemisphere 2013 calibration curve. The first 3 columns are the calibrated age (in years BP), the radiocarbon age (in 14C years BP), and the 1 sigma standard error (also in 14C years BP).
data(intcal13)
A data frame with 5141 observations on 5 variables.
For full details and reference see http://intcal.org/blurb.html. For usage details see BchronCalibrate
Northern hemisphere 2020 calibration curve. The first 3 columns are the calibrated age (in years BP), the radiocarbon age (in 14C years BP), and the 1 sigma standard error (also in 14C years BP).
data(intcal20)
A data frame with 9501 observations on 5 variables.
For full details and reference see http://intcal.org/blurb.html. For usage details see BchronCalibrate
Marine 2013 calibration curve. The first 3 columns are the calibrated age (in years BP), the radiocarbon age (in 14C years BP), and the 1 sigma standard error (also in 14C years BP).
data(marine13)
A data frame with 4801 observations on 5 variables
For full details and reference see http://intcal.org/blurb.html. For usage details see BchronCalibrate
Marine 2020 calibration curve. The first 3 columns are the calibrated age (in years BP), the radiocarbon age (in 14C years BP), and the 1 sigma standard error (also in 14C years BP).
data(marine20)
A data frame with 5501 observations on 5 variables
For full details and reference see http://intcal.org/blurb.html. For usage details see BchronCalibrate
Data for dummy calibration of normally distributed ages
data(normal)
A data frame with 2 observations on 3 variables.
This is dummy data so that BchronCalibrate
can calibrate normally distributed dates.
Plots calibrated radiocarbon dates from a BchronCalibrate
run. Has options to plot on a position (usually depth) scale if supplied with the original run
## S3 method for class 'BchronCalibratedDates' plot( x, date = NULL, withPositions = ifelse(length(x) > 1 & !is.null(x[[1]]$positions) & !includeCal, TRUE, FALSE), includeCal = FALSE, dateHeight = 100, dateLabels = TRUE, dateLabelSize = 2, nudgeX = 0, nudgeY = 0, fillCol = rgb(47/255, 79/255, 79/255, 0.5), withHDR = TRUE, ageScale = c("bp", "bc", "b2k"), scaleReverse = TRUE, pathToCalCurves = system.file("data", package = "Bchron"), ... )
x |
Output from BchronCalibrate |
date |
Either numbers or date names to plot (only used if multiple dates have been calibrated) |
withPositions |
Whether to plot with positions (i.e. using the position values as the y axis). By default TRUE if more than one date has been calibrated, position values were supplied, and includeCal is FALSE |
includeCal |
Whether to plot the date alongside the calibration curve (with 95% uncertainty bands) and the normally distributed uncalibrated date. |
dateHeight |
The height of the dates in the plot in the same units as the position values. Only relevant if withPositions is TRUE |
dateLabels |
Whether to add the names of the dates to the left of them. Default TRUE |
dateLabelSize |
Size of the date labels |
nudgeX |
The amount to move the date labels in the x direction. Can be negative |
nudgeY |
The amount to move the date labels in the y direction. Can be negative |
fillCol |
A colour used to fill the plotted date densities |
withHDR |
Whether to plot the 95% highest density region values |
ageScale |
Either 'bp' for years before present (1950), 'bc' for years BC/AD, or 'b2k' for years before 2000 |
scaleReverse |
Whether to reverse the x-axis scale. Defaults to TRUE which works best for dates presented in e.g. years BP |
pathToCalCurves |
The Bchron path to calibration curves. Defaults to the package location; this may need to be set to another folder if user-defined calibration curves are being used |
... |
Other arguments to plot (currently ignored) |
These plots are intended to be pretty basic and used simply for quick information. Users are encouraged to learn the R plotting features to produce publication quality graphics
BchronCalibrate
, Bchronology
, BchronRSL
, BchronDensity
, BchronDensityFast
BchronDensity
Plot output from BchronDensity
## S3 method for class 'BchronDensityRun' plot( x, plotDates = TRUE, plotRawSum = FALSE, plotPhase = TRUE, phaseProb = 0.95, dateTransparency = 0.4, ... )
x |
Output from BchronDensity |
plotDates |
Whether to plot the individual calibrated dates |
plotRawSum |
Whether to plot the raw sum of the probability distributions |
plotPhase |
Whether to plot the phase values |
phaseProb |
The probability value for the phase identification |
dateTransparency |
The transparency value for the dates (default 0.4) |
... |
Other graphical commands |
See BchronDensity
for examples, also Bchronology
, BchronRSL
, and BchronDensityFast
for a faster approximate version of this function
BchronDensityFast
Plots output from BchronDensityFast
## S3 method for class 'BchronDensityRunFast' plot(x, plotDates = TRUE, plotSum = FALSE, dateTransparency = 0.4, ...)
x |
Output from BchronDensityFast |
plotDates |
Whether to include individual age pdfs (default TRUE) |
plotSum |
Whether to include sum of age pdfs (default FALSE) |
dateTransparency |
The transparency value for the dates (default 0.4) |
... |
Other graphical parameters |
Creates a basic plot of output for a run of BchronDensityFast
Examples in BchronDensityFast
, and see BchronDensity
, for a slower, more accurate version of this function
Plots output from a run of Bchronology
## S3 method for class 'BchronologyRun' plot( x, dateHeight = 100, dateLabels = TRUE, dateLabelSize = 2, dateCol = rgb(47/255, 79/255, 79/255, 0.5), chronCol = "deepskyblue4", chronTransparency = 0.75, alpha = 0.95, nudgeX = 0, nudgeY = 0, expandX = if (dateLabels) { c(0.1, 0) } else { c(0, 0) }, expandY = c(0.05, 0), ageScale = c("bp", "bc", "b2k"), scaleReverse = TRUE, ... )
x |
The object created by Bchronology |
dateHeight |
The height of the dates in the plot (on the same scale as the positions) |
dateLabels |
Whether to label the dates on the vertical axis (default TRUE) |
dateLabelSize |
The size of the date labels |
dateCol |
The colour of the date labels |
chronCol |
The colour of the chronology uncertainty ribbon to be plotted |
chronTransparency |
The amount of transparency for the chronology ribbon |
alpha |
The credible interval of the chronology run to be plotted. Defaults to 95 percent |
nudgeX |
The amount to move the date labels in the x direction. Can be negative |
nudgeY |
The amount to move the date labels in the y direction. Can be negative |
expandX |
The amount to expand the horizontal axis in case parts are missed off the plot |
expandY |
The amount to expand the vertical axis in case parts are missed off the plot |
ageScale |
Either 'bp' for years before present (1950), 'bc' for years BC/AD, or 'b2k' for years before 2000 |
scaleReverse |
Whether to reverse the x-axis scale. Defaults to TRUE which works best for dates presented in e.g. years BP |
... |
Other arguments to plot (currently ignored) |
Creates a simple plot of the chronology output. The height of the date densities in the plots can be manipulated via the dateHeight
argument which is represented in the same units as the positions/depths provided. More detailed plots can be created by manipulating the Bchronology object as required.
For examples see Bchronology
. Also BchronCalibrate
, BchronRSL
, BchronDensity
, BchronDensityFast
Plot output from the BchronRSL
function
## S3 method for class 'BchronRSLRun' plot( x, type = c("RSL", "rate", "accel"), alpha = 0.95, ellipseCol = "darkslategray", lineCol = "deepskyblue4", ... )
x |
An object created by BchronRSL |
type |
One of 'RSL', 'rate', or 'accel' |
alpha |
confidence level used for plotting ellipses |
ellipseCol |
The colour of the ellipse used for plotting dates |
lineCol |
The colour of the sea level curve lines |
... |
Other arguments to plot (currently ignored) |
BchronCalibrate
, Bchronology
, BchronRSL
, BchronDensity
, BchronDensityFast
This function will predict the ages of new positions (usually depths) based on a previous run of the function Bchronology
. It will also allow for thickness uncertainties to be included in the resulting ages, for example when the age of a particular event is desired
## S3 method for class 'BchronologyRun' predict( object, newPositions, newPositionThicknesses = NULL, maxExtrap = 500, ... )
object |
Output from a run of Bchronology |
newPositions |
A vector of new positions at which to find ages |
newPositionThicknesses |
A vector of thicknesses for the above positions. Must be the same length as newPositions |
maxExtrap |
The maximum number of extrapolation attempts. It might be worth increasing this if you are extrapolating a long way from the other dated positions |
... |
Other arguments to predict (not currently supported) |
A matrix of dimension num_samples by num_positions so that each row represents a set of monotonic sample predicted ages
BchronCalibrate
, Bchronology
BchronRSL
, BchronDensity
, BchronDensityFast
A function for extracting sample ages from Bchron calibrated dates
sampleAges(CalDates, n_samp = 10000)
CalDates |
A list created from either BchronCalibrate or Bchronology |
n_samp |
The desired number of samples |
Sometimes it is useful to have a set of sample calendar ages for your calibrated dates. For example the samples might be required to create a credible/confidence interval, or to create another non-trivial function of calibrated dates, such as differences. By default the BchronCalibrate
function provides a grid of ages and an associated density, similar to OxCal. This function extracts that information and uses the sample
function to output the desired number of samples
A vector of length n_samp
containing sample ages for the specified date
# Calibrate multiple ages and summarise them
ages <- BchronCalibrate(
  ages = c(3445, 11553, 7456),
  ageSds = c(50, 230, 110),
  calCurves = c("intcal20", "intcal20", "shcal20")
)

# Get samples
age_samples <- sampleAges(ages)

# Create a credible interval and the median for each date
apply(age_samples, 2, quantile, probs = c(0.05, 0.5, 0.95))
Southern hemisphere 2013 calibration curve. The first 3 columns are the calibrated age (in years BP), the radiocarbon age (in 14C years BP), and the 1 sigma standard error (also in 14C years BP).
data(shcal13)
A data frame with 5141 observations on 5 variables.
For full details and reference see http://intcal.org/blurb.html. For usage details see BchronCalibrate
Southern hemisphere 2020 calibration curve. The first 3 columns are the calibrated age (in years BP), the radiocarbon age (in 14C years BP), and the 1 sigma standard error (also in 14C years BP).
data(shcal20)
A data frame with 9501 observations on 5 variables.
For full details and reference see http://intcal.org/blurb.html. For usage details see BchronCalibrate
Chronology data for Sluggan Moss data set
data(Sluggan)
A data frame with 31 observations on the following 6 variables:
id
ID of each age
ages
Age in (14C) years BP
ageSds
Age standard deviations
position
Depths in cm
thickness
Thicknesses in cm
calCurves
Calibration curve for each age
This Sluggan Moss data can be downloaded from the European Pollen Database: http://www.europeanpollendatabase.net. For usage see Bchronology
or BchronDensity
Smith, A. G., & Goddard, I. C. (1991). A 12,500 year record of vegetational history at Sluggan Bog, Co. Antrim, N. Ireland (incorporating a pollen zone scheme for the non-specialist). New Phytologist, 118, 167-187.
Produces summary output from a BchronCalibrate
run, including the highest density regions for the calibrated ages for given probability levels
## S3 method for class 'BchronCalibratedDates' summary(object, prob = 95, ..., digits = max(3, getOption("digits") - 3))
object |
The output of a run of BchronCalibrate |
prob |
A percentage value (between 0 and 100) at which the highest density regions for each age are calculated |
... |
Further arguments (not currently supported) |
digits |
Significant digits to display (not currently supported) |
BchronCalibrate
, Bchronology
, BchronRSL
, BchronDensity
, BchronDensityFast
Summarise a BchronDensity
object
## S3 method for class 'BchronDensityRun' summary(object, prob = 0.95, ..., digits = max(3, getOption("digits") - 3))
object |
Output from a run of BchronDensity |
prob |
Probability for identifying phases |
... |
Other arguments (not currently supported) |
digits |
Number of digits to report values |
Summarise a Bchronology
object
## S3 method for class 'BchronologyRun' summary( object, type = c("quantiles", "outliers", "convergence", "sed_rate", "acc_rate", "max_var"), probs = c(0.025, 0.25, 0.5, 0.75, 0.975), useExisting = TRUE, numPos = 3, ..., digits = max(3, getOption("digits") - 3) )
object |
Output from a run of Bchronology |
type |
Type of output required. The default ('quantiles') gives the quantiles of the ages at each position in predictPositions; the other options summarise outliers, convergence, sedimentation rate, accumulation rate, or maximum variance |
probs |
Probabilities (between 0 and 1) at which to summarise the predicted chronologies |
useExisting |
Whether to use the predicted chronologies/positions to calculate the sedimentation rate (if TRUE - default) or to re-create them based on a unit-scaled position grid (if FALSE). The latter will be a little bit slower but will provide better sedimentation rate estimates if the original positions are not on a unit scale (e.g. each cm) |
numPos |
The number of positions at which to provide the maximum variance |
... |
Other arguments (not currently supported) |
digits |
Number of digits to report values |
BchronCalibrate
, Bchronology
BchronRSL
, BchronDensity
, BchronDensityFast
Summarise a BchronRSL
run
## S3 method for class 'BchronRSLRun' summary( object, type = c("parameters", "RSL", "rate", "accel"), age_grid = NULL, ... )
object |
The output from a run of BchronRSL |
type |
One of 'parameters', 'RSL', 'rate', or 'accel' |
age_grid |
An optional age grid for computing RSL, rate, or acceleration estimates. If not provided uses the age range of the Bchronology run |
... |
Other arguments to functions (not currently implemented) |
BchronCalibrate
, Bchronology
, BchronRSL
, BchronDensity
, BchronDensityFast
Some example chronology data for use with the BchronRSL
function
data(TestChronData)
A data frame with 27 observations on the following 6 variables:
id
ID names
ages
Ages in years BP
ageSds
Ages standard deviations in years BP
position
Depths in cm
thickness
Thicknesses in cm
calCurves
Calibration curve for each age
Andrew C. Parnell and W. Roland Gehrels (2013) 'Using chronological models in late holocene sea level reconstructions from salt marsh sediments' In: I. Shennan, B.P. Horton, and A.J. Long (eds). Handbook of Sea Level Research. Chichester: Wiley
A set of relative sea level data for use with BchronRSL
data(TestRSLData)
A data frame with 24 observations on the following 3 variables:
Depth
Depth in cm
RSL
Relative sea level in m
Sigma
Standard deviation of RSL measurement
Andrew C. Parnell and W. Roland Gehrels (2013) 'Using chronological models in late holocene sea level reconstructions from salt marsh sediments' In: I. Shennan, B.P. Horton, and A.J. Long (eds). Handbook of Sea Level Research. Chichester: Wiley
Uncalibrate a Radiocarbon date
unCalibrate( calAges, calCurve = "intcal20", type = c("samples", "ages"), pathToCalCurves = system.file("data", package = "Bchron"), ... )
calAges |
Either a vector of calibrated ages (when type = 'ages') or a vector/matrix of samples of calibrated ages (when type = 'samples') |
calCurve |
The calibration curve to use. Only a single calibration curve is currently supported |
type |
Either 'ages', which uncalibrates calibrated age values without error (i.e. just a lookup on the calibration curve), or 'samples', which estimates both an uncalibrated mean age and a standard deviation |
pathToCalCurves |
The path to the calibration curve directory. Defaults to the location of the standard calibration curves given in the package |
... |
Other arguments |
Either a vector of uncalibrated ages (type = 'ages'
) or a list containing the estimated mean age and standard deviation (type = 'samples'
)
# Single version outputting just an uncalibrated age
unCalibrate(2350, type = "ages")

# Vector version giving a vector of uncalibrated ages
unCalibrate(
  calAge = c(2350, 4750, 11440),
  calCurve = "shcal20",
  type = "ages"
)

# A version where calibrated standard deviations are required too
calAge <- BchronCalibrate(
  ages = 11255,
  ageSds = 25,
  calCurves = "intcal20"
)
calSampleAges <- sampleAges(calAge)

# Uncalibrate the above
unCalibrate(calSampleAges, type = "samples")