This method estimates prestige values for grave good types based on the Total Object Types (TOT) found in graves containing each item.
Example data: This document uses Corded Ware Culture grave goods from Moravia (N = 82 graves) from Nørtoft (2022), with comparisons to a larger combined dataset from Bohemia and Moravia (N = 419 graves) from Nørtoft & Rohrlach (forthcoming).
For items with n >= 2: use an Empirical Bayes (method-of-moments) prior derived from the item's own data. The CI is determined by the observed variance - more consistent TOT values give a narrower CI.
For unique items (n = 1): use a weak (global) prior plus an N-based CI adjustment. The more graves searched without finding another occurrence, the narrower the CI.
Each grave good type gets a prestige value equal to the posterior mean of TOT values from graves containing that item. We use a Gamma-Gamma conjugate model.
Use the method of moments to derive the prior from the item’s own data. This is called Empirical Bayes because the prior is learned from the data itself:
If variance is zero (all identical TOT values), we use V = 0.1 as a small epsilon.
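Since the later code chunks call `calc_mom_posterior()`, here is a minimal sketch of what that helper could look like under the method-of-moments scheme just described. The function name matches the later chunks, but the body is a reconstruction consistent with the text and the result tables, not the app's verbatim implementation:

```r
# Sketch: Empirical Bayes posterior with a method-of-moments Gamma prior.
# Reconstruction; assumes the same conjugate update as the n = 1 case below.
calc_mom_posterior <- function(y) {
  n <- length(y)
  m <- mean(y)
  V <- var(y)
  if (is.na(V) || V == 0) V <- 0.1   # epsilon when all TOT values are identical
  alpha0 <- m^2 / V                  # Gamma prior matching observed mean m
  beta0  <- m / V                    #   and observed variance V
  alpha_post <- alpha0 + sum(y)      # conjugate update
  beta_post  <- beta0 + n
  lower <- qgamma(0.025, shape = alpha_post, rate = beta_post)
  upper <- qgamma(0.975, shape = alpha_post, rate = beta_post)
  list(estimate = alpha_post / beta_post,
       lower = lower, upper = upper, width = upper - lower)
}
```

For example, y = c(4, 2) gives alpha0 = 4.5, beta0 = 1.5, and a posterior mean of 10.5 / 3.5 = 3.00, matching the n = 2 row of the occurrence table later in this document.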
Use a weak prior that is minimally informative:
alpha0 = 0.01
beta0 = 0.001
Observed TOT: 4
Posterior estimate: 4.01
Prior contribution: 0.249%
The prior contributes negligibly to the posterior, so the estimate is essentially equal to the observed value.
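The numbers above follow directly from the conjugate update; this snippet recomputes the posterior estimate and the prior's share of the posterior shape parameter:

```r
# Weak prior posterior for a single observed TOT of 4
alpha0 <- 0.01
beta0  <- 0.001
y <- 4
alpha_post <- alpha0 + y              # 4.01
beta_post  <- beta0 + 1               # 1.001
estimate <- alpha_post / beta_post    # ~4.01
prior_share <- alpha0 / alpha_post    # ~0.249% of the posterior shape
```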
For unique items, we have no variance information - just one TOT value. But there’s additional information we’re not using: how many graves did we search?
Example: a spindleWhorl appears in only 1 grave with TOT = 4 - whether we searched 82 graves or 419 graves.
The second case provides more confidence: we searched far more graves and still found the item in only that one grave.
adjustment = 1 / (1 + N * 0.001)
Adjusted CI width = Bayesian CI width * adjustment
| N | N × 0.001 | Adjustment | CI Narrowing |
|---|---|---|---|
| 50 | 0.050 | 0.952 | 4.8% |
| 82 | 0.082 | 0.924 | 7.6% |
| 100 | 0.100 | 0.909 | 9.1% |
| 200 | 0.200 | 0.833 | 16.7% |
| 419 | 0.419 | 0.705 | 29.5% |
| 500 | 0.500 | 0.667 | 33.3% |
| 1000 | 1.000 | 0.500 | 50% |
Key property: N = 1000 gives 50% narrowing. Our Moravia dataset (N = 82) gives 7.6% narrowing, while the combined Bohemia + Moravia dataset (N = 419) gives 29.5% narrowing.
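The adjustment column of the table above can be regenerated with a few lines of R:

```r
# N-based CI adjustment at the dataset sizes shown in the table
N <- c(50, 82, 100, 200, 419, 500, 1000)
adjustment <- 1 / (1 + N * 0.001)
narrowing_pct <- (1 - adjustment) * 100
data.frame(N, adjustment = round(adjustment, 3),
           narrowing_pct = round(narrowing_pct, 1))
```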
For items with n >= 2, the Empirical Bayes prior already captures confidence through the observed variance. Adding an N-based adjustment would be redundant.
This keeps a clean separation of concerns: for n = 1, N (search effort) drives the CI adjustment; for n >= 2, the observed variance drives the prior.
Data: spindleWhorl found in 1 grave with TOT = 4
Dataset: N = 82 graves
Weak prior: alpha0 = 0.01, beta0 = 0.001
Posterior: alpha_post = 0.01 + 4 = 4.01, beta_post = 0.001 + 1 = 1.001
Prestige estimate = 4.01 / 1.001 = 4.01
Bayesian 95% CI = [1.09, 8.77] (width = 7.68)
N = 82
Adjustment = 1 / (1 + 82 × 0.001) = 1 / 1.082 = 0.924
Adjusted CI width = 7.68 × 0.924 = 7.1
Adjusted 95% CI = [0.46, 7.55]
CI narrowing: 7.6%
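The whole worked example (weak-prior posterior, then the N-adjustment) fits in a few lines:

```r
# Full n = 1 pipeline for the spindleWhorl example (N = 82 graves)
alpha0 <- 0.01; beta0 <- 0.001
y <- 4; N <- 82
alpha_post <- alpha0 + y
beta_post  <- beta0 + 1
estimate <- alpha_post / beta_post
bayes_lower <- qgamma(0.025, shape = alpha_post, rate = beta_post)
bayes_upper <- qgamma(0.975, shape = alpha_post, rate = beta_post)
adj <- 1 / (1 + N * 0.001)                       # ~0.924
adj_width <- (bayes_upper - bayes_lower) * adj
final_lower <- estimate - adj_width / 2
final_upper <- estimate + adj_width / 2
```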
We have two real datasets to compare:
- Moravia only: N = 82 graves (Nørtoft 2022)
- Bohemia + Moravia: N = 419 graves (Nørtoft & Rohrlach, forthcoming)
The same spindleWhorl appears in both datasets - let’s see how the CI changes.
| Dataset Size | Adjustment | CI Narrowing | Adjusted CI |
|---|---|---|---|
| N = 82 | 0.924 | 7.6% | [0.46, 7.55] |
| N = 419 | 0.705 | 29.5% | [1.3, 6.71] |
N=82 vs N=419: The CI is 31% wider for N=82, reflecting our lower confidence when fewer graves have been searched.
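The 31% figure follows from applying the two adjustment factors to the same Bayesian width of 7.68:

```r
# Same weak-prior CI width, narrowed by each dataset's N-adjustment
width_bayes <- 7.68
width_82  <- width_bayes / (1 + 82  * 0.001)   # ~7.10
width_419 <- width_bayes / (1 + 419 * 0.001)   # ~5.41
ratio <- width_82 / width_419                  # ~1.31, i.e. 31% wider
```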
When we find additional occurrences of an item, confidence increases through the Empirical Bayes prior, not through N-based adjustment.
# Demonstrate how CI changes with more observations
# Using unique item TOT value from our example dataset
N_fixed <- N_example
gold_tot_value <- unique_item_tot
cases <- list(
list(y = c(gold_tot_value), desc = "n = 1"),
list(y = c(gold_tot_value, gold_tot_value - 2), desc = "n = 2"),
list(y = c(gold_tot_value, gold_tot_value - 2, gold_tot_value - 1), desc = "n = 3"),
list(y = c(gold_tot_value, gold_tot_value - 2, gold_tot_value - 1, gold_tot_value - 1, gold_tot_value + 1), desc = "n = 5"),
list(y = c(gold_tot_value, gold_tot_value - 2, gold_tot_value - 1, gold_tot_value - 1, gold_tot_value + 1, gold_tot_value - 3, gold_tot_value, gold_tot_value - 2), desc = "n = 8")
)
occ_results <- lapply(cases, function(case) {
n <- length(case$y)
if (n == 1) {
# Use weak prior + N-adjustment (same as app)
alpha0 <- 0.01
beta0 <- 0.001
alpha_post <- alpha0 + case$y[1]
beta_post <- beta0 + 1
estimate <- alpha_post / beta_post
bayes_lower <- qgamma(0.025, shape = alpha_post, rate = beta_post)
bayes_upper <- qgamma(0.975, shape = alpha_post, rate = beta_post)
bayes_width <- bayes_upper - bayes_lower
# Apply N-adjustment
adj <- 1 / (1 + N_fixed * 0.001)
adj_width <- bayes_width * adj
lower <- estimate - adj_width / 2
upper <- estimate + adj_width / 2
width <- adj_width
note <- paste0("(N-adjusted, N=", N_fixed, ")")
} else {
# Empirical Bayes (same as app)
result <- calc_mom_posterior(case$y)
estimate <- result$estimate
lower <- result$lower
upper <- result$upper
width <- result$width
note <- ""
}
data.frame(
Occurrences = case$desc,
TOT_values = paste0("[", paste(case$y, collapse = ", "), "]"),
Estimate = round(estimate, 2),
CI_95 = paste0("[", round(lower, 2), ", ", round(upper, 2), "]"),
CI_Width = paste0(round(width, 2), " ", note)
)
})
occ_table <- do.call(rbind, occ_results)
kable(occ_table, col.names = c("Occurrences", "TOT values", "Estimate", "95% CI", "CI Width")) %>%
kable_styling(bootstrap_options = c("striped", "hover"))
| Occurrences | TOT values | Estimate | 95% CI | CI Width |
|---|---|---|---|---|
| n = 1 | [4] | 4.01 | [0.46, 7.55] | 7.1 (N-adjusted, N=82) |
| n = 2 | [4, 2] | 3.00 | [1.47, 5.07] | 3.6 |
| n = 3 | [4, 2, 3] | 3.00 | [1.78, 4.54] | 2.76 |
| n = 5 | [4, 2, 3, 3, 5] | 3.40 | [2.22, 4.83] | 2.61 |
| n = 8 | [4, 2, 3, 3, 5, 1, 4, 2] | 3.00 | [2.01, 4.18] | 2.17 |
Note: CI width depends on observed variance, not just n. The table may show widening as n increases if more varied TOT values are observed. This correctly reflects increased uncertainty when the item appears in graves with different wealth levels.
The table below shows an example of the input data format for the app. Each row is a grave, columns are grave good types (1 = present, 0 = absent), and TOT is calculated as the total number of different object types per grave.
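For illustration, TOT can be computed from such a matrix with a row sum over the presence/absence columns. This uses toy data, not the real dataset:

```r
# Toy presence/absence matrix; TOT = number of object types per grave
graves <- data.frame(
  Grave_ID = c("toy_gr1", "toy_gr2", "toy_gr3"),
  jugs  = c(1, 0, 1),
  pots  = c(0, 0, 1),
  bowls = c(1, 0, 1)
)
type_cols <- setdiff(names(graves), "Grave_ID")
graves$TOT <- rowSums(graves[, type_cols])
```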
| Grave_ID | amphorae | beakers | jugs | pots | beaker_handled | beaker_cylindrical | bowls | jars_smallbowl | cups_handled | other_pottypes | sherds | animal_species | spindleWhorl | bone_bead | boars_tusk | tooth_ornament | tooth_imitation | bone_tool_simple | bone_tool_complex | copperAwlNeedle | copperKnifeRazor | copper_ornament | copperNeckRing | GoldHairRing | flintAxe | flint_other | flint_arrowhead | shell_disc | perf_shell | macehead | battle_axe | stone_hammer | stone_flat_axe | whetstone | mineral_stone_other | ochre | TOT |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Kru_143.1.4_gr4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Kru_143.1.8_gr8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Tva_321.3.2_gr4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Hol_93.2.3_gr3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Hol_93.2.4_gr4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Iv4_803 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Klo_121.1.1_gr1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Pru_257.1.1_gr1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| BrS_30.1.1_gr9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Hol_93.1.1_gr2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Hol_93.1.3_gr36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Iv3_2_804 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Iv4_807A | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Iv4_807B | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| Hra_100.3.1_gr1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Kru_143.1.3_gr3 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Brn_24.2.1_gr1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Pro_246.1.1_gr1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| BrC_25.4.2_gr1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Sud_294.1.1_gr5 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Tva_321.3.1_gr3 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Cel_56.1.1_gr1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
| Hos_839 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Iv3_2_805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| KuH_132.1.7.2_br2 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
| Mos_198.1.1_gr1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Pod_240.1.1_br1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
| Stra_291.1.1_gr1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
| BrS_30.1.3_gr42 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| BrS_30.1.4_gr70 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Buc_35.1.1_gr1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Buc_35.1.2_gr2 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Dra_72.1.1_gr1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Iv3_2_806 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Iv3_2_809 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Iv3_2_811 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| Niv_104.1.1.1_gr1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Lut_173.1.4_gr4 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Mar_177.4.5_gr5 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Bol_20.1.1_gr1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Nec_207.1.1_gr1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Nec_207.1.6_gr18 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Sla_273.1.1_gr1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 4 |
| Vic_347.1.1_gr1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Vic_347.1.5_gr5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
| Dra_72.1.2_gr2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 4 |
| Hol_93.2.2_gr2 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
| Iv3_2_825 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 4 |
| Blu_14.1.1_gr1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
| Let_155.1.3_br5 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| Mou_199.1.2_gr2 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| Nec_207.1.3_gr11 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| Pro_246.3.1_gr1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| Siv_271.1.1_gr1A | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
| Tov_314.1.2_gr2 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| Vic_347.1.3_gr3 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
| Hol_93.2.1_gr1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
| Iv3_2_803 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| KyN_148.2.1_gr1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 6 |
| Mor_197.1.1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
| Slav_277.1.1_gr1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
| Smr_281.1.1_gr1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
| Vre_355.1.2_gr2 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 6 |
| Zel_368.1.1_gr1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 6 |
| Hol_93.1.2_gr26 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
| Kru_143.1.7_gr7 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
| Let_155.1.1_br3 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
| Let_155.1.2_br4 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 7 |
| BrS_30.1.2_gr36 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 7 |
| Tes_312.1.1_gr1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
| Det_58.1.1_gr1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
| Hos_801 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
| Hos_802 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 8 |
| Hos_838 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
| Iv3_2_801 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
| Pav_232.1.2_gr14 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
| Hos_97.1.1_gr1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 10 |
| Iv4_800 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 10 |
| Mar_177.1.1_gr1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
| Pav_232.1.1_gr5 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 11 |
| Iv4_810 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 11 |
| Let_155.1.4_br6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 16 |
Key patterns visible in the data: graves range from no surviving goods (TOT = 0) up to the single richest grave (Let_155.1.4_br6, TOT = 16). Common pottery types such as jugs, pots, and amphorae occur across all wealth levels, while rare items such as the GoldHairRing appear only in the richest grave.
The prestige value for each item type is estimated from the mean TOT of graves containing that item.
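Given a presence/absence matrix with a TOT column, the per-item TOT vectors feeding the estimator can be collected like this (toy data; the app's own column handling may differ):

```r
# Toy data: TOT values of graves containing each item type
graves <- data.frame(
  Grave_ID = c("toy_gr1", "toy_gr2", "toy_gr3"),
  jugs  = c(1, 0, 1),
  bowls = c(0, 1, 1),
  TOT   = c(1, 1, 2)
)
item_cols <- setdiff(names(graves), c("Grave_ID", "TOT"))
tot_by_item <- lapply(item_cols, function(col) graves$TOT[graves[[col]] == 1])
names(tot_by_item) <- item_cols
mean_tot <- sapply(tot_by_item, mean)   # naive (non-Bayesian) prestige values
```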
# Calculate results for each item (items list already defined in setup)
plot_data <- lapply(items, function(item) {
n <- length(item$y)
obs_min <- min(item$y)
obs_max <- max(item$y)
if (n == 1) {
# Weak prior + N-adjustment
alpha0 <- 0.01
beta0 <- 0.001
alpha_post <- alpha0 + item$y[1]
beta_post <- beta0 + 1
estimate <- alpha_post / beta_post
bayes_lower <- qgamma(0.025, shape = alpha_post, rate = beta_post)
bayes_upper <- qgamma(0.975, shape = alpha_post, rate = beta_post)
adj <- 1 / (1 + N_example * 0.001)
adj_width <- (bayes_upper - bayes_lower) * adj
final_lower <- estimate - adj_width / 2
final_upper <- estimate + adj_width / 2
} else {
result <- calc_mom_posterior(item$y)
estimate <- result$estimate
bayes_lower <- result$lower
bayes_upper <- result$upper
final_lower <- bayes_lower
final_upper <- bayes_upper
}
data.frame(
name = item$name,
n = n,
estimate = estimate,
obs_min = obs_min,
obs_max = obs_max,
bayes_lower = bayes_lower,
bayes_upper = bayes_upper,
final_lower = final_lower,
final_upper = final_upper
)
})
plot_df <- do.call(rbind, plot_data)
# Create label with sample size: "Type (n=X)"
plot_df$label <- paste0(plot_df$name, " (n=", plot_df$n, ")")
# Order by estimate (lowest prestige first, like app boxplot)
plot_df$label <- factor(plot_df$label, levels = plot_df$label[order(plot_df$estimate)])
# Create plot with vertical orientation (types on x-axis, TOT on y-axis)
# Gray = observed range (ALL items)
# Blue = Empirical Bayes CI (n>=2)
# Black = Weak prior Bayesian CI (n=1)
# Red = N-adjusted CI (n=1)
ggplot(plot_df, aes(x = label)) +
# Gray: Observed range (all items)
geom_errorbar(aes(ymin = obs_min, ymax = obs_max),
width = 0.3, color = "gray60", linewidth = 1.5, alpha = 0.5) +
# For n>=2: Blue CI (Empirical Bayes)
geom_errorbar(data = subset(plot_df, n > 1),
aes(ymin = final_lower, ymax = final_upper),
width = 0.2, color = "#3498db", linewidth = 1.2) +
geom_point(data = subset(plot_df, n > 1),
aes(y = estimate), color = "#3498db", size = 3) +
# For n=1: Black (weak prior Bayesian CI)
geom_errorbar(data = subset(plot_df, n == 1),
aes(ymin = bayes_lower, ymax = bayes_upper),
width = 0.2, color = "black", linewidth = 1,
position = position_nudge(x = -0.15)) +
geom_point(data = subset(plot_df, n == 1),
aes(y = estimate), color = "black", size = 2,
position = position_nudge(x = -0.15)) +
# For n=1: Red (N-adjusted CI)
geom_errorbar(data = subset(plot_df, n == 1),
aes(ymin = final_lower, ymax = final_upper),
width = 0.2, color = "#e74c3c", linewidth = 1.2,
position = position_nudge(x = 0.15)) +
geom_point(data = subset(plot_df, n == 1),
aes(y = estimate), color = "#e74c3c", size = 3,
position = position_nudge(x = 0.15)) +
labs(y = "TOT Value", x = "",
title = "Prestige Estimates with Confidence Intervals",
subtitle = paste("N =", N_example, "graves (Moravia)")) +
theme_minimal() +
theme(legend.position = "none",
axis.text.x = element_text(angle = 60, hjust = 1, vjust = 1, size = 8))
The same analysis using the larger dataset (N = 419 graves). Notice how the N-adjusted CI (red) for the spindleWhorl is narrower compared to the smaller dataset above.
# Calculate results for each item from the LARGE dataset
plot_data_large <- lapply(items_large, function(item) {
n <- length(item$y)
obs_min <- min(item$y)
obs_max <- max(item$y)
if (n == 1) {
# Weak prior + N-adjustment (using N_large)
alpha0 <- 0.01
beta0 <- 0.001
alpha_post <- alpha0 + item$y[1]
beta_post <- beta0 + 1
estimate <- alpha_post / beta_post
bayes_lower <- qgamma(0.025, shape = alpha_post, rate = beta_post)
bayes_upper <- qgamma(0.975, shape = alpha_post, rate = beta_post)
adj <- 1 / (1 + N_large * 0.001)
adj_width <- (bayes_upper - bayes_lower) * adj
final_lower <- estimate - adj_width / 2
final_upper <- estimate + adj_width / 2
} else {
result <- calc_mom_posterior(item$y)
estimate <- result$estimate
bayes_lower <- result$lower
bayes_upper <- result$upper
final_lower <- bayes_lower
final_upper <- bayes_upper
}
data.frame(
name = item$name,
n = n,
estimate = estimate,
obs_min = obs_min,
obs_max = obs_max,
bayes_lower = bayes_lower,
bayes_upper = bayes_upper,
final_lower = final_lower,
final_upper = final_upper
)
})
plot_df_large <- do.call(rbind, plot_data_large)
# Create label with sample size
plot_df_large$label <- paste0(plot_df_large$name, " (n=", plot_df_large$n, ")")
# Order by estimate
plot_df_large$label <- factor(plot_df_large$label, levels = plot_df_large$label[order(plot_df_large$estimate)])
ggplot(plot_df_large, aes(x = label)) +
# Gray: Observed range (all items)
geom_errorbar(aes(ymin = obs_min, ymax = obs_max),
width = 0.3, color = "gray60", linewidth = 1.5, alpha = 0.5) +
# For n>=2: Blue CI (Empirical Bayes)
geom_errorbar(data = subset(plot_df_large, n > 1),
aes(ymin = final_lower, ymax = final_upper),
width = 0.2, color = "#3498db", linewidth = 1.2) +
geom_point(data = subset(plot_df_large, n > 1),
aes(y = estimate), color = "#3498db", size = 3) +
# For n=1: Black (weak prior Bayesian CI)
geom_errorbar(data = subset(plot_df_large, n == 1),
aes(ymin = bayes_lower, ymax = bayes_upper),
width = 0.2, color = "black", linewidth = 1,
position = position_nudge(x = -0.15)) +
geom_point(data = subset(plot_df_large, n == 1),
aes(y = estimate), color = "black", size = 2,
position = position_nudge(x = -0.15)) +
# For n=1: Red (N-adjusted CI)
geom_errorbar(data = subset(plot_df_large, n == 1),
aes(ymin = final_lower, ymax = final_upper),
width = 0.2, color = "#e74c3c", linewidth = 1.2,
position = position_nudge(x = 0.15)) +
geom_point(data = subset(plot_df_large, n == 1),
aes(y = estimate), color = "#e74c3c", size = 3,
position = position_nudge(x = 0.15)) +
labs(y = "TOT Value", x = "",
title = "Prestige Estimates with Confidence Intervals",
subtitle = paste("N =", N_large, "graves (Bohemia + Moravia)")) +
theme_minimal() +
theme(legend.position = "none",
axis.text.x = element_text(angle = 60, hjust = 1, vjust = 1, size = 8))
Key observation: The spindleWhorl has the same Bayesian CI (black bar) in both plots because it’s based only on the observed TOT value. However, the N-adjusted CI (red bar) is 23.7% narrower in the larger dataset (N = 419) compared to the smaller one (N = 82).
Reading the plot:
- Gray bars: observed TOT range (all items)
- Blue bars and points: Empirical Bayes 95% CI (items with n >= 2)
- Black bars: weak-prior Bayesian CI; red bars: N-adjusted CI (items with n = 1)

Example interpretation:
- jugs (n = 49): estimate 5.61 with a tight CI of [4.97, 6.29] - many observations give high confidence.
- spindleWhorl (n = 1): estimate 4.01; the weak-prior CI [1.09, 8.77] is narrowed by the N-adjustment to [0.46, 7.55].
# Recalculate with range info for all items
results_data <- lapply(items, function(item) {
n <- length(item$y)
obs_range <- paste0("[", min(item$y), ", ", max(item$y), "]")
if (n == 1) {
alpha0 <- 0.01
beta0 <- 0.001
alpha_post <- alpha0 + item$y[1]
beta_post <- beta0 + 1
estimate <- alpha_post / beta_post
bayes_lower <- qgamma(0.025, shape = alpha_post, rate = beta_post)
bayes_upper <- qgamma(0.975, shape = alpha_post, rate = beta_post)
adj <- 1 / (1 + N_example * 0.001)
adj_width <- (bayes_upper - bayes_lower) * adj
final_lower <- estimate - adj_width / 2
final_upper <- estimate + adj_width / 2
bayes_ci_str <- paste0("[", round(bayes_lower, 2), ", ", round(bayes_upper, 2), "]")
n_adj <- paste0(round((1 - adj) * 100, 1), "%")
method <- "Weak prior + N-adj"
} else {
result <- calc_mom_posterior(item$y)
estimate <- result$estimate
final_lower <- result$lower
final_upper <- result$upper
bayes_ci_str <- "-"
n_adj <- "-"
method <- "Empirical Bayes"
}
data.frame(
Item = item$name,
n = n,
Range = obs_range,
Estimate = round(estimate, 2),
Bayes_CI = bayes_ci_str,
Final_CI = paste0("[", round(final_lower, 2), ", ", round(final_upper, 2), "]"),
N_adjustment = n_adj,
Method = method
)
})
results_table <- do.call(rbind, results_data)
kable(results_table, col.names = c("Item", "n", "TOT Range", "Estimate", "Weak Prior CI", "Final CI", "N-adj", "Method")) %>%
kable_styling(bootstrap_options = c("striped", "hover")) %>%
row_spec(which(results_table$n == 1), background = "#fef9e7")
| Item | n | TOT Range | Estimate | Weak Prior CI | Final CI | N-adj | Method |
|---|---|---|---|---|---|---|---|
| amphorae | 32 | [1, 16] | 6.38 | - | [5.54, 7.27] | - | Empirical Bayes |
| beakers | 16 | [1, 11] | 4.50 | - | [3.54, 5.57] | - | Empirical Bayes |
| jugs | 49 | [1, 16] | 5.61 | - | [4.97, 6.29] | - | Empirical Bayes |
| pots | 36 | [2, 16] | 6.22 | - | [5.44, 7.05] | - | Empirical Bayes |
| beaker_handled | 2 | [4, 16] | 10.00 | - | [6.22, 14.66] | - | Empirical Bayes |
| bowls | 28 | [1, 16] | 5.46 | - | [4.64, 6.35] | - | Empirical Bayes |
| jars_smallbowl | 3 | [3, 6] | 4.33 | - | [2.68, 6.37] | - | Empirical Bayes |
| cups_handled | 8 | [5, 16] | 8.50 | - | [6.67, 10.54] | - | Empirical Bayes |
| other_pottypes | 13 | [2, 16] | 6.00 | - | [4.76, 7.38] | - | Empirical Bayes |
| sherds | 17 | [1, 10] | 4.76 | - | [3.81, 5.83] | - | Empirical Bayes |
| animal_species | 18 | [1, 11] | 5.89 | - | [4.84, 7.04] | - | Empirical Bayes |
| spindleWhorl | 1 | [4, 4] | 4.01 | [1.09, 8.77] | [0.46, 7.55] | 7.6% | Weak prior + N-adj |
| bone_bead | 2 | [11, 11] | 11.00 | - | [10.39, 11.62] | - | Empirical Bayes |
| boars_tusk | 3 | [7, 16] | 10.00 | - | [6.91, 13.65] | - | Empirical Bayes |
| tooth_ornament | 1 | [11, 11] | 11.00 | [5.49, 18.38] | [5.04, 16.96] | 7.6% | Weak prior + N-adj |
| tooth_imitation | 1 | [11, 11] | 11.00 | [5.49, 18.38] | [5.04, 16.96] | 7.6% | Weak prior + N-adj |
| bone_tool_simple | 22 | [2, 11] | 7.00 | - | [5.96, 8.12] | - | Empirical Bayes |
| bone_tool_complex | 3 | [7, 11] | 9.00 | - | [6.62, 11.74] | - | Empirical Bayes |
| copperAwlNeedle | 5 | [7, 16] | 10.20 | - | [7.76, 12.97] | - | Empirical Bayes |
| copperKnifeRazor | 4 | [6, 16] | 10.00 | - | [7.29, 13.13] | - | Empirical Bayes |
| copper_ornament | 15 | [2, 16] | 7.53 | - | [6.23, 8.96] | - | Empirical Bayes |
| copperNeckRing | 1 | [8, 8] | 8.00 | [3.46, 14.42] | [2.94, 13.07] | 7.6% | Weak prior + N-adj |
| GoldHairRing | 1 | [16, 16] | 15.99 | [9.14, 24.73] | [8.79, 23.2] | 7.6% | Weak prior + N-adj |
| flintAxe | 4 | [4, 9] | 6.00 | - | [4.05, 8.33] | - | Empirical Bayes |
| flint_other | 30 | [3, 16] | 6.53 | - | [5.66, 7.47] | - | Empirical Bayes |
| perf_shell | 4 | [3, 11] | 7.75 | - | [5.39, 10.53] | - | Empirical Bayes |
| battle_axe | 15 | [2, 16] | 5.87 | - | [4.72, 7.13] | - | Empirical Bayes |
| stone_flat_axe | 10 | [5, 11] | 7.70 | - | [6.18, 9.38] | - | Empirical Bayes |
| whetstone | 3 | [7, 16] | 11.00 | - | [7.81, 14.72] | - | Empirical Bayes |
| mineral_stone_other | 7 | [1, 16] | 7.00 | - | [5.21, 9.05] | - | Empirical Bayes |
| ochre | 1 | [6, 6] | 6.00 | [2.21, 11.67] | [1.63, 10.38] | 7.6% | Weak prior + N-adj |
Yellow row = unique item (n=1) with weak prior and N-based CI adjustment
Reading the table:
- For n = 1 items (yellow), the Final CI is the Weak Prior CI narrowed by the factor 1/(1 + N*0.001).
- For n >= 2 items, the Final CI is the pure Empirical Bayes CI; no N-adjustment is applied.

Principle: Use N to compensate for lack of variance information (n = 1 only). Once we have multiple observations, let the observed variance determine confidence.
The 0.001 constant: interpretable as "N = 1000 gives 50% CI narrowing." This gives conservative scaling that tightens gradually as the dataset grows.
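The constant can also be read in reverse: solving adjustment = 1/(1 + c*N) for N gives the dataset size needed for a target narrowing (here with c = 0.001 as used throughout):

```r
# Invert the adjustment formula: which N yields a given CI narrowing?
c_const <- 0.001
target_narrowing <- 0.5                                   # 50% narrowing
N_needed <- (1 / (1 - target_narrowing) - 1) / c_const    # 1000 graves
```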
method_table <- data.frame(
Condition = c("n = 1", "n >= 2"),
Prior = c("Weak (alpha0=0.01, beta0=0.001)", "Empirical Bayes (method of moments)"),
CI_Method = c("N-adjusted: 1/(1 + N*0.001)", "Pure Bayesian CI"),
Rationale = c("No variance info; N indicates search effort",
"Variance-based prior handles confidence")
)
kable(method_table, col.names = c("Condition", "Prior", "CI Method", "Rationale")) %>%
kable_styling(bootstrap_options = c("striped", "hover"))
| Condition | Prior | CI Method | Rationale |
|---|---|---|---|
| n = 1 | Weak (alpha0=0.01, beta0=0.001) | N-adjusted: 1/(1 + N*0.001) | No variance info; N indicates search effort |
| n >= 2 | Empirical Bayes (method of moments) | Pure Bayesian CI | Variance-based prior handles confidence |
Document generated: 2025-12-22