Federal Agencies
Federal Executive Branch
Here's a look at documents from the U.S. Executive Branch
Featured Stories
White House Fact Sheet: Establishing Policies That Drive Accountable Hiring
WASHINGTON, July 8 -- The White House issued the following fact sheet on July 7, 2025:
* * *
President Donald J. Trump Ensures Accountability and Prioritizes Public Safety in Federal Hiring
ESTABLISHING POLICIES THAT DRIVE ACCOUNTABLE HIRING: Today, President Donald J. Trump signed a Presidential Memorandum that requires many federal hires to be approved by an agency's presidentially appointed leadership, to end incompetence and "equity" over results.
* This prohibits filling vacant federal civilian positions or creating new ones without approval from agency leadership, with certain exceptions.
- Exemptions from the policy for immigration enforcement, national security, and public safety positions shall remain, which apply to roles like Department of Veterans Affairs medical personnel, food safety inspectors, firefighters, air traffic controllers, and National Weather Service employees.
* This Memorandum provides that the policy applies through October 15, 2025.
- The Memorandum allows hiring that is directly approved by senior agency leadership appointed by the President.
- This ensures democratic accountability, rather than hiring being driven by the bureaucracy, and that hiring decisions are based on agency priorities.
* It also clarifies that any hiring of employees be consistent with the Merit Hiring Plan issued by the Administration on May 29, 2025.
PROMOTING FISCAL RESPONSIBILITY AND GOVERNMENT EFFICIENCY: President Trump is strengthening accountable hiring practices to ensure taxpayer dollars are used efficiently.
* In the last two years of the Biden Administration, government was directly responsible for the creation of more than 1 in every 4 jobs.
* President Trump is committed to reversing this trend by prioritizing private-sector job growth and maintaining oversight of hiring by presidentially appointed leadership.
- This ensures the Federal workforce remains focused on essential functions and fully aligned with administration priorities.
REFORMING THE FEDERAL BUREAUCRACY: The American people elected President Trump to drain the swamp and end ineffective government programs that empower government without achieving measurable results.
* The government wastes billions of dollars each year on duplicative programs and frivolous expenditures that fail to align with American values or address the needs of the American people.
* The Trump Administration is committed to streamlining the Federal Government, eliminating unnecessary programs, and reducing bureaucratic inefficiency.
* President Trump launched a 10-to-1 deregulation initiative, ensuring every new rule is justified by clear benefits.
* President Trump authorized buyout programs to encourage federal employees to leave voluntarily.
* Through these actions, President Trump is keeping his promise to restore efficiency and accountability in the Federal Government.
* * *
Original text here: https://www.whitehouse.gov/fact-sheets/2025/07/fact-sheet-president-donald-j-trump-ensures-accountability-and-prioritizes-public-safety-in-federal-hiring/
USDA to Forecast Grape Production
WASHINGTON, July 8 -- The U.S. Department of Agriculture's National Agricultural Statistics Service issued the following news release:
* * *
USDA to forecast grape production
WASHINGTON, July 7, 2025 - Starting at the end of July, the U.S. Department of Agriculture's (USDA) National Agricultural Statistics Service (NASS) will mail the Grape Inquiry - August 2025 survey to approximately 2,000 U.S. growers. The survey asks for grape acreage and projected production. NASS will forecast 2025 grape production based on the information collected.
"The information from this survey directly impacts U.S. grape growers," said USDA NASS Administrator Joseph L. Parsons. "Growers can use the forecast data when making business plans and marketing decisions. The data can also inform programs and projects provided by agencies, Cooperative Extension, state and local governments, and other industry groups in service to our nation's growers."
Growers can respond to the survey securely online at agcounts.usda.gov, by mail, or by fax. The information provided is protected and confidential in accordance with federal law (Title V, Subtitle A, Public Law 107-347). For assistance with the survey, please call 888-424-7828.
The 2025 U.S. grape forecast will be released at noon ET, Aug. 12, 2025, in the Crop Production report. All NASS reports are available online at nass.usda.gov.
Mark your calendar for Aug. 12, 2025, at 1:30 p.m. ET for a live Stat Chat following the forecast release. Join NASS Agricultural Statistics Board Chair Lance Honig @usda_nass on X and use #StatChat when posting your question.
NASS is the federal statistical agency responsible for producing official data about U.S. agriculture and is committed to providing timely, accurate and useful statistics in service to U.S. agriculture.
USDA is an equal opportunity provider, employer and lender. To file a complaint of discrimination, write to USDA, Assistant Secretary for Civil Rights, Office of the Assistant Secretary for Civil Rights, 1400 Independence Avenue, S.W., Stop 9410, Washington, DC 20250-9410, or call toll-free at (866) 632-9992 (English) or (800) 877-8339 (TDD) or (866) 377-8642 (English Federal-relay) or (800) 845-6136 (Spanish Federal-relay).
* * *
Original text here: https://www.nass.usda.gov/Newsroom/2025/07-07-2025.php
FCC: ENFORCEMENT BUREAU REMINDS MVPDS OF 2025 FCC FORM 396-C DEADLINE
WASHINGTON, July 8 -- The Federal Communications Commission's Enforcement Bureau issued the following order (File No.: DA 25-568):
* * *
ENFORCEMENT BUREAU REMINDS MVPDS OF 2025 FCC FORM 396-C DEADLINE
Pursuant to section 76.77 of the Commission's rules, 47 CFR Sec. 76.77, a multichannel video program distributor (MVPD) employment unit with six or more full-time employees must file an FCC Form 396-C, Multichannel Video Programming Distributor EEO Program Annual Report, by September 30 each year. By this Notice, we remind MVPDs of this recurring obligation. The Form 396-C must be submitted via the EEO filing portal in the Cable Operations and Licensing System (COALS), which can be accessed at https://fccprod.servicenowservices.com/coals./1 Additionally, we identify in the following pages those employment units that must complete the Supplemental Investigation Sheet (SIS) of the Form 396-C this year. Moreover, Section I of the COALS Form 396-C, labeled "Supplemental Investigation Sheet," automatically displays a check mark for those filers that are required to submit the SIS. SIS filers should also take note of the following requirements:
* Part I: One job description must be provided for this category: Professionals
* Part II: Only questions 3, 5, and 8 must be answered.
* Part III: The employment unit's 2025 EEO Public File Report,/2 covering the previous 12 months (2024-2025), must be attached.
Questions concerning the Form 396-C can be directed to EEO staff at EB-EEO@fcc.gov or (202) 418-1450. For technical assistance with COALS, please contact coals_help@fcc.gov; for help with CORES matters, direct questions to COREShelp@fcc.gov.
FCC NOTICE REQUIRED BY THE PAPERWORK REDUCTION ACT
We have estimated that each response to this collection of information will take from 0.166 to 2.5 hours. Our estimate includes the time to read the instructions, look through existing records, gather and maintain the required data, and actually complete and review the form or response. If you have any comments on this burden estimate, or on how we can improve the collection and reduce the burden it causes you, please e-mail them to pra@fcc.gov or send them to the Federal Communications Commission, AMD-PERM, Paperwork Reduction Project (3060-1033), Washington, DC 20554. Please DO NOT SEND COMPLETED APPLICATIONS TO THIS ADDRESS. Remember - you are not required to respond to a collection of information sponsored by the Federal government, and the government may not conduct or sponsor this collection, unless it displays a currently valid OMB control number or if we fail to provide you with this notice. This collection has been assigned an OMB control number of 3060-1033.
THE FOREGOING NOTICE IS REQUIRED BY THE PAPERWORK REDUCTION ACT OF 1995, P.L. 104-13, OCTOBER 1, 1995, 44 U.S.C. 3507.
1/ Please note that COALS requires a Commission Registration System (CORES) username and password to which authority for one or more Federal Registration Numbers (FRNs) has been delegated. To manage CORES usernames and passwords, please visit https://apps.fcc.gov/cores/userLogin.do. For additional information regarding FRNs and the CORES system, please visit https://www.fcc.gov/licensing-databases/commission-registration-system-fcc.
2/ See 47 CFR Sec. 76.1702(b) of the Commission's rules for information regarding the annual EEO Public File Report.
* * *
Original text here: https://docs.fcc.gov/public/attachments/DA-25-568A1.pdf
ERDC and NATO Experiment Advances Engineer Survivability
FRECATEI, Romania, July 8 -- The U.S. Army Engineer Research and Development Center issued the following news story:
* * *
ERDC and NATO experiment advances engineer survivability
By Kaley Skaggs, public affairs specialist
The U.S. Army Engineer Research and Development Center (ERDC), along with the NATO Military Engineering Working Group's Camouflage, Concealment, Deception, and Obscuration Team of Experts, recently conducted an experiment in support of "surviving the gap" during wet gap crossing exercises in Frecatei, Romania.
From June 6-13, 2025, the experiment was conducted alongside and in coordination with the 7th Engineer Brigade and V Corps and demonstrated signature management methods and other effects to enhance the survivability of combat engineers and critical mobility and logistics assets. Finding new and improved ways to reduce risks to troops and equipment is especially important during wet gap crossings and similar maneuvers, where forces tend to be more vulnerable as they bridge and cross contested water obstacles.
ERDC's team of subject-matter experts were in Romania for Saber Guardian 25 to employ new technologies and better understand how they work in operational environments. Saber Guardian 25 is an exercise co-led by V Corps, Hungarian Defense Forces and Romanian Land Forces. The exercise took place at various locations in Bulgaria, Hungary and Romania.
"This exercise provided invaluable Warfighter touchpoints that directly inform ongoing ERDC research efforts in survivability and protection of critical force projection capabilities," said Carey Price, a senior research engineer with ERDC's Geotechnical and Structures Laboratory.
The information learned will feed future experiments and programs focused on keeping Soldiers safe during large-scale combat exercises and real-world operations.
In close coordination with U.S. Army Europe and Africa, U.S. European Command and other U.S. units in theater, ERDC regularly works hand-in-hand with NATO Allies and other U.S. partners to develop and provide innovative solutions to military engineering challenges.
"The opportunity to collaborate with allies not only enhances interoperability between partner nations, but allows the U.S. research enterprise to leverage diverse skillsets and capabilities that are not always available domestically," said Price.
The NATO Military Engineering Working Group and the Camouflage, Concealment, Deception, and Obscuration Team of Experts were also strong partners for collaborating internationally, integrating allies into joint exercises, and including their technologies in this experiment.
ERDC has worked closely with the engineer community for years to improve Army engineers' abilities to safely conduct wet gap crossings. The organization's military engineering portfolio conducts extensive work in both force protection and force projection. This exercise allows both of those fields to collaborate in the area of protected maneuver.
"This work is crucial to ensuring the survivability of the limited bridging assets and personnel available worldwide," said Price. "Recent conflicts have highlighted the vulnerability of this mission set, and at ERDC, we are working to provide viable protective countermeasures."
Experiments like this directly contribute to improving tactics, techniques and procedures for Warfighters and increase the readiness, effectiveness and lethality of units engaging and defeating America's adversaries.
"ERDC will continue pursuing unconventional protection methods and delivering those capabilities to our engineer units, because we know first-hand that they're counting on it," said Price.
* * *
Original text here: https://www.erdc.usace.army.mil/Media/News-Stories/Article/4235828/erdc-and-nato-experiment-advances-engineer-survivability/
Census Bureau Issues Working Paper Entitled 'Investments Under Risk - Evidence From Hurricane Strikes'
WASHINGTON, July 8 (TNSLrpt) -- The U.S. Census Bureau issued the following working paper (No. CES 25-43) in June 2025 entitled "Investments under Risk: Evidence from Hurricane Strikes." The paper was written by Rajesh Aggarwal and Mufaddal Baxamusa.
Here are excerpts:
* * *
Introduction
Hurricane strikes can lead to significant damage. For instance, Hurricane Katrina, one of the most catastrophic hurricanes in U.S. history, caused an estimated $182.5 billion in damages (adjusted to 2021 dollars). While the immediate impacts of such events are severe, their long-term effects on the economy are unclear. Following a hurricane, affected areas may experience a period of rebuilding, which could spur economic recovery. Conversely, the heightened awareness of future risks and the influence of climate change may also prompt individuals and companies to relocate, possibly leading to sustained economic losses. For example, New Orleans only regained its pre-Katrina population by 2023.
This paper examines the responses of companies' capital investment decisions to hurricane strikes. Companies may increase capital investment in the aftermath of a hurricane to rebuild or repair damage (Pelli et al., 2023, Olshansky, 2018) or capitalize on government incentives (see Fu and Gregory, 2019) and tax relief (Stead, 2006). Alternatively, the increase in hurricane risks and climate change concerns may encourage companies to shift their operations to less hurricane-prone regions, resulting in a reduction of capital expenditures in affected areas.
We hypothesize that hurricane strikes are not independent and identically distributed events as climate change suggests that the hurricane-hit areas have an increased probability of being hit by another hurricane. We propose that the increased frequency of hurricanes - potentially exacerbated by climate change - raises the perceived long-term risk of being located in hurricane-prone areas. As a result, companies may respond by reducing investments in the regions impacted by hurricanes.
Similar to Dessaint and Matray (2017) we use a natural experiment, examining hurricanes from 1989 to 2017 that caused economic losses exceeding $5 billion, adjusting for inflation. By analyzing establishment-level data from the Census Bureau, we investigate how capital expenditures vary before and after the hurricane strike across plants located in affected versus unaffected regions.
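The establishment-level comparison described above is a difference-in-differences design: the change in capital expenditures at plants in affected regions, before versus after the strike, is measured against the same change at plants in unaffected regions. A minimal sketch on synthetic data may help fix ideas (the actual Census microdata are restricted-access; the plant counts, noise levels, and assumed treatment effect below are all hypothetical):

```python
# Hypothetical difference-in-differences sketch on synthetic plant-year data.
# Not the paper's actual estimation; illustrates the comparison only.
import random

random.seed(0)

# Each observation: (affected_region, post_strike, capital_expenditure).
data = []
for plant in range(200):
    affected = plant < 100  # first 100 plants sit in hurricane-hit counties
    for post in (0, 1):
        base = 100 + random.gauss(0, 5)          # baseline capex plus noise
        effect = -10 if (affected and post) else 0  # assumed post-strike drop
        data.append((affected, post, base + effect))

def mean_capex(affected, post):
    vals = [c for a, p, c in data if a == affected and p == post]
    return sum(vals) / len(vals)

# DiD estimate: (affected post - affected pre) - (unaffected post - unaffected pre)
did = (mean_capex(True, 1) - mean_capex(True, 0)) - (
    mean_capex(False, 1) - mean_capex(False, 0)
)
print(round(did, 1))  # negative: capex falls in affected regions relative to controls
```

The double difference nets out both region-specific levels and economy-wide time trends, which is why the design identifies the hurricane effect rather than, say, a nationwide investment cycle.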
Our findings suggest that companies in hurricane-impacted areas tend to decrease their capital expenditures post-strike, reflecting a shift in investment away from these higher-risk locations. In contrast, plants in non-hurricane-impacted regions, particularly those owned by firms with plants in affected areas, tend to increase their capital investments. Capital expenditure can be broadly categorized into two types: building capital expenditure and machinery capital expenditure. Examining building capital expenditures shows a clear decline in building-related investments in regions impacted by hurricanes. In contrast, plants of the same firms in regions unaffected by hurricanes show an increase in building capital expenditures, suggesting a potential redirection of investments from hurricane-impacted areas to safer regions. Similar results are obtained with investments in machinery. This shift in capital investment behavior may reflect the growing perceived risk associated (see Engle et al, 2020; Braun et al., 2021; Huang et al., 2021) with operating in areas prone to recurrent hurricane strikes, prompting firms to allocate resources away from vulnerable locations.
The impact of hurricanes on business activity extends beyond capital expenditure decisions, influencing the survival and formation of new plants. The results suggest an increased probability of plant exits in hurricane-impacted areas, indicating that the aftermath of a hurricane may prompt businesses to close or relocate. In contrast, firms with plants in both hurricane and non-hurricane-affected areas are less likely to exit in non-hurricane-affected areas, suggesting a shift of economic activity from disaster-prone zones to more stable locations. Similarly, the likelihood of new plant formation in non-hurricane areas increases, especially for firms with plants in affected regions, further supporting the notion of investment redirection in response to heightened risk.
Next, we examine the effect of hurricanes on real output across different metropolitan statistical areas (MSAs). Prior research has shown that natural disasters can cause substantial economic damage, leading to significant reductions in output due to physical destruction and disrupted supply chains (e.g., Rose, 2004; Hallegatte & Przyluski, 2010). Our analysis reveals a significant decrease in real output in plants located in hurricane-impacted regions, as expected.
However, we also observe a notable increase in real output in non-hurricane-impacted areas, particularly for firms that operate plants in both affected and unaffected regions. This suggests that some firms may shift production to non-impacted areas.
Further investigation into the pre- and post-1997 periods suggests that the observed patterns are more pronounced in the post-1997 period, with hurricanes after 1997 having a more significant impact on real output. This shift in the magnitude of the effect could be attributed to increased awareness of climate risks and heightened resilience efforts in the wake of major global climate agreements, such as the Kyoto Protocol (MacCracken et al., 1999), which may have influenced businesses' risk perceptions.
Overall, our findings demonstrate that hurricanes lead to significant shifts in economic activity, with firms adjusting their output and capital expenditures in response to the disruptions caused by hurricane events. This paper contributes to the understanding of how climate change influences business decision-making and the reallocation of resources.
Section 2 describes our data. Section 3 describes our identification strategies. Section 4 presents our results. Section 5 concludes.
* * *
Conclusion
We explore how companies adjust their capital investment in response to hurricane strikes. Firms may either invest more in disaster-hit areas due to rebuilding efforts and government incentives or relocate and reduce investments due to heightened risk awareness and the increasing frequency of hurricanes linked to climate change. Intriguingly, we find evidence for the first hypothesis, rebuilding, during the early part of our sample up to 1997. From 1997 on, we find evidence for the second hypothesis, relocation. We conjecture that the Kyoto Protocol signed in December 1997 may have increased the salience of actual hurricane strikes for hurricane-affected firms.
For the sample as a whole, as well as the post-1997 sub-sample, we find that hurricanes generally lead to a decrease in capital expenditures in plants in affected areas, with a significant reduction in both building and machinery investments. By contrast, the same firms with other plants in non-hurricane hit areas experience increased investment, likely due to a shift in capital away from risk-prone areas. The likelihood of plant closures increases significantly in hurricane-affected areas, while new plant formation declines. Conversely, non-hurricane areas see an increase in new plant formation, suggesting a shift of economic activity to safer regions. Also, real output decreases significantly in hurricane-affected areas, while non-hurricane areas experience growth, indicating a redistribution of economic activity away from disaster-prone regions.
Our results have implications for the geography of firm production in response to increased climate risk. Perhaps not surprisingly, firms relocate from areas that are perceived to be riskier. The precise mechanisms for this relocation - is it due to labor relocation, greater insurance costs, or more general costly production - we leave to future research.
* * *
The paper is posted at: https://www2.census.gov/library/working-papers/2025/adrm/ces/CES-WP-25-43.pdf
Census Bureau Issues Working Paper Entitled 'Cognitive & Usability Testing Results of 2023 Census Test Instrument in English & Spanish'
WASHINGTON, July 8 (TNSLrpt) -- The U.S. Census Bureau issued the following working paper (No. rsm2025-10) on July 7, 2025, entitled "Cognitive and Usability Testing Results of the 2023 Census Test Instrument in English and Spanish." The paper was written by Marcus Berger, Erica Olmsted, Shelley Feuer, Crystal Hernandez, Alda Rivas and Elizabeth Nichols.
Here are excerpts:
* * *
TABLE OF CONTENTS
1. INTRODUCTION ... 1
2. RESEARCH QUESTIONS ... 1
3. METHODS ... 2
3.1 Session methodology ... 2
3.2 Participants ... 4
3.2.1 Table 1: Participant demographic information...
4. RESULTS ... 5
4.1 Usability findings ... 5
4.1.1 Address Screen ... 5
4.1.2 Whole/Partial Screen ... 7
4.1.3 Vacant Path ... 9
4.1.4 Other usability issues and observations ... 11
4.2 Satisfaction findings ... 18
Table 2: Average responses by language to satisfaction questions ....
4.3 Knowledge check ... 18
5. SUMMARY RECOMMENDATIONS ... 19
6. REFERENCES ... 20
Appendix A. Knowledge Check Questions ... 22
LIST OF TABLES
Table 1: Participant demographic information ... 4
Table 2: Average responses by language to satisfaction questions ... 18
LIST OF FIGURES
Figure 1. Flow of questions in the 2023 Census Test Pre-testing ... 3
Figure 2. 2020 Census Address screen with separate fields for address number and street name ... 5
Figure 3. 2023 Census Test Address screen with a combined field for address number and street name ... 6
Figure 4. Whole/Partial Question tested ... 7
Figure 5. Redesigned Whole/Partial Question used in the 2023 Census Test ... 9
Figure 6. Vignette used to test the vacancy path ... 9
Figure 7. Initial vacancy question used in the usability test ... 10
Figure 8. Follow up vacancy question if "Vacant Residence" is selected in Figure 7 ... 10
Figure 9. Initial vacancy question used in the 2023 Census Test ... 11
Figure 10. Roster screen with Add Person button and Jane Doe as the name already listed ... 12
Figure 11. Mockup of recommended changes to Roster screen ... 13
Figure 12. Owner/renter question in Spanish ... 14
Figure 13. Date of birth and age ... 15
Figure 14. Hispanic origin question ... 16
Figure 15. Race question ... 17
* * *
1. INTRODUCTION
The 2023 Census Test is the first iteration of the Small-Scale Response Testing (SmaRT) program that is part of the Census Bureau's Small-Scale Testing Initiative. SmaRT is our primary small-scale testing vehicle and how testing is being conducted this decade (complementing our two major census tests: the 2026 Census Test and the 2028 Dress Rehearsal) (United States Census Bureau 2025). The Center for Behavioral Science Methods (CBSM) at the Census Bureau carried out pretesting, including cognitive testing of the 2023 Census Test mailing materials (Feuer et al. 2023a), as well as usability testing of the online census questionnaire. This pretesting took place in the fall of 2022 with 17 participants in preparation for a field test in March 2023. Nine participants tested the mail materials and online questionnaire in English and eight in Spanish. Some interviews were conducted virtually, and others were conducted in person.
This report focuses on the usability testing of the online questionnaire for the 2023 Census Test. The online questionnaire included typical decennial census questions such as address collection, whether the home was owned or rented, names of individuals living at the address, the name of the householder, demographics for each household member, and relationship to the householder. The focus of the usability testing was on three key revisions to the 2020 Census online questionnaire.
1. The address collection fields were modified from the version used in the online 2020 Census questionnaire to separate the address number from the street name (in 2020 the address number and street name were collected in one field).
2. A new question was added to the questionnaire asking whether the respondent included everyone in the household on the roster, referred to as the Whole/Partial roster question. We looked at responses to this question as well as its placement.
3. A new path to collect whether the address was vacant, along with new vacancy questions.
Based on the findings from this usability testing, we recommended some changes to the question order and the wording of questions and response options.
In the following sections we outline the research questions; the methodology used in the pretest, focusing on the usability testing of the internet instrument; the findings; the recommendations; and whether those recommendations were adopted in the March 2023 field test.
2. RESEARCH QUESTIONS
One research question in this pretesting was related to a redesigned address collection screen. In the 2020 Census, respondents were asked to enter their address number and street name into separate fields. Previous usability testing recommended that the address collection question be changed to ask for address number and street name in a single field. The current testing further investigated whether there were any issues related to the redesigned address collection screen.
We also tested a new question that asks respondents who they are including in their responses on the census questionnaire, and whether that encompasses everyone who lives at that address or only some of the people. Since this is a new question, our testing was looking for any cognitive or usability issues with that question.
Part of our testing was also looking at the flow for questions about reporting vacant units. Participants recruited for this study were not recruited based on having a vacant residence, so we created vignettes to test whether there were any apparent cognitive or usability issues with the questions.
Finally, we had an overarching research goal of identifying any potential general issues with the wording, instructions, layout, or display of any portions of the online instrument. The research questions for this study were answered based on participants' answers to the questions, combined with their spontaneous and think-aloud responses, as well as observational, behavioral, and satisfaction data.
3. METHODS
3.1 Session methodology
Testing was conducted one-on-one with a test administrator (TA) and individual participants, though sessions were occasionally observed by a notetaker or other observers who were members of the project team. Some sessions were conducted virtually, and others were conducted in person, as part of separate research into the effects of interview mode after the coronavirus pandemic saw a surge in remote virtual interviewing (Feuer et al. 2023b). The entire session was 90 minutes in length, both in person and virtually. The cognitive testing of mailing materials portion occurred during the first 45 minutes, with the cognitive and usability testing of the internet instrument taking the latter 45 minutes of the session. We obtained oral consent to collect data from each participant, and each participant received a monetary incentive of $60 for their participation. The TA trained participants to "think aloud" during the session to provide us spontaneous comments as they completed the online questionnaire.
Virtual sessions were conducted using Microsoft (MS) Teams, a free videoconferencing application. MS Teams allows for screen sharing, audio, and webcam use. Prior to the session, the TA led the participant through a brief "tech check" to familiarize the participant with the technology, specifically how to share their screen and access the chat feature. Practicing how to use these features in the tech check saved time during the session itself. Virtual session participants used their own laptop, tablet, or smartphone.
For the in-person sessions, the participants joined an MS Teams meeting so that other project team members could observe the session. While in-person participants generally also used their own laptop or mobile device, the TA would provide a laptop if the participant encountered technical problems.
Each session also had a notetaker, when available, to assist the TA in capturing the participants' comments and behavior. Some sessions had other members of the project team observing the session. These observers could provide the TA with additional questions for the participant at the end of the session. The TA recorded the session using the Snagit recording application. The online questionnaire was tested in English and Spanish. The questionnaire was programmed in Qualtrics and followed the 2020 Census online questionnaire design with a few exceptions. Figure 1 shows the flow of the questionnaire for a generic participant.
* * *
Figure 1. Flow of questions in the 2023 Census Test Pre-testing
* * *
The tasks that the participant completed prior to and during the session were as follows:
* (Prior to the session) Completed "tech check" for remote sessions - Participants practiced joining an MS Teams session and sharing their screen.
* Task 1: Participants read the mailed census notifications and answered TA follow-up questions. Full results from this segment of the testing can be found in Feuer et al. 2023a.
* Task 2: Participants completed the online 2023 Census Test questionnaire.
* Task 3: Participants completed a satisfaction questionnaire about their experience with the 2023 Census Test questionnaire.
* Task 4: Participants answered TA debriefing questions about the questionnaire they had just completed.
* Task 5: If time allowed, participants answered a self-administered knowledge check questionnaire. The knowledge check asked participants about their understanding of topics related to the census.
* Task 6: Participants were asked for any final comments or reactions on anything they encountered during the session.
Data used to answer the research questions for this study included spontaneous verbalizations elicited during the think-aloud portion of the session and participant feedback obtained during the debriefing portion of the study that included directed probes. Observational data was based on what the test administrator and notetaker observed participants doing throughout the session while answering the online questionnaire.
3.2 Participants
Seventeen participants (9 English-speakers and 8 monolingual Spanish-speakers) from across the country participated in the study. The study was split into both virtual and in-person sessions, where the in-person sessions were conducted in the Washington, D.C. metropolitan area and the virtual sessions were with participants from across the country. Participants completed the interview using the device they would normally use to respond to a questionnaire. We aimed to recruit participants who would complete the questionnaire using different devices, either a computer or a mobile device such as a smartphone or tablet. Seven English speakers completed the study on a laptop and two on a smartphone. For Spanish-speakers, three completed the study on a laptop, four completed it on a smartphone, and one participant completed it on a tablet. Participants with different household compositions were recruited, including those from unrelated households, such as roommates or unmarried partners, people living alone, and people living with nuclear family members. Approximately half of households for both English and Spanish participants had all members related to each other, and about half had at least one unrelated member. Other detailed demographics of the participants are provided in Table 1.
* * *
Table 1: Participant demographic information
* * *
4. RESULTS
4.1 Usability findings
4.1.1 Address Screen
After logging into the online questionnaire, participants saw an address collection screen. The address screen tested was updated from the 2020 Census address screen. In the 2020 Census instrument, the address screen asked respondents to enter their address number and street name in separate fields (see Figure 2). Previous usability testing had revealed that the separate fields did not work well and recommended a combined field. The proposed new address screen asked participants to enter their address number and street name in the same field (see Figure 3). Since the combined Address Number and Street Name fields were a recommendation of previous testing, the goal of this testing was to confirm that there were not unforeseen usability issues with the new design. The separate fields that were used in the 2020 Census were not tested again in this research.
* * *
Figure 2. 2020 Census Address screen with separate fields for address number and street name
* * *
Figure 3. 2023 Census Test Address screen with a combined field for address number and street name.
* * *
Most English-speaking participants entered their address information accurately and did not have any issues with the new address screen. These participants found the layout of the screen to be similar to other address screens they had completed before.
Some Spanish-speaking participants, however, struggled with the address screen. Some of these participants had to look up their address. Four Spanish speakers included portions of their address information in the wrong field. For instance, one Spanish speaker included the city and state in the Street Address field and put the ZIP code in the Apartment field, and another Spanish-speaking participant included the city and ZIP code in the Street Address field and repeated the address number in the Apartment field. These issues all occurred on mobile devices. However, it is not clear that the errors were due to device type. Because we did not want to point out respondent errors, we did not probe respondents on why they entered their information this way.
Two English-speaking college students did not know where to count themselves and counted themselves in the wrong place. However, their errors were not associated with the address field design but with the concept of where they lived.
Based on our testing, we recommended that the 2023 Census Test continue to use the combined address number and street name field. Since the apartment field appeared to cause some confusion, we recommended that there be some kind of indication that the Apartment/Unit field is optional. We also recommended that the Spanish screen not use "apto." as an abbreviation for "apartamento" (apartment). Finally, since there was some confusion about where to count college students, we recommended including clarifying instructional language on where college students should count themselves.
In the 2023 Census Test that followed this cognitive and usability testing, both address screens (one with the combined address number and street name fields and one with the separated fields) were tested in a split ballot experiment. The response fields and labels in that test did not change based on this qualitative testing. Results of the 2023 Census Test are forthcoming.
4.1.2 Whole/Partial Screen
Instructions on the census questionnaire for the 2023 Census Test ask the respondent to list everyone living or staying at an address as of a particular date. Most respondents provide an exhaustive list of every person living or staying at an address. However, sometimes multiple questionnaires are submitted for the same address and sometimes those questionnaires have different people listed on them. This indicates that, in some cases, respondents are not completing the questionnaire for everyone living or staying at an address. The primary purpose of the Whole/Partial question (Figure 4) is to indicate whether the Census Bureau should expect an additional census questionnaire with different people for that address. If the respondent reports that they did not list everyone at their address and another questionnaire also arrives for that address, but with different people, then the Census Bureau would know that the two households should both be recorded as living at that address.
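The record-handling logic this question supports can be illustrated with a small sketch. The function name, data shapes, and merge rule below are hypothetical illustrations of the idea, not the Census Bureau's actual processing rules: if no return for an address claims a complete roster, partial rosters could be combined rather than treated as conflicting.

```python
# Hypothetical sketch of how a Whole/Partial flag could guide the handling
# of multiple questionnaire returns for one address. Names and logic are
# illustrative only.
def combine_returns(returns):
    """Each return is (roster, listed_everyone). If any return claims a
    complete roster, use the first such roster; otherwise union the
    partial rosters into one combined list."""
    for roster, listed_everyone in returns:
        if listed_everyone:
            return sorted(roster)
    combined = set()
    for roster, _ in returns:
        combined |= set(roster)
    return sorted(combined)

# Two partial returns for the same address list different people:
print(combine_returns([({"Ana", "Ben"}, False), ({"Carla"}, False)]))
# -> ['Ana', 'Ben', 'Carla']
```

The key design point the question enables is the first branch: a return flagged as complete can stand on its own, while partial returns signal that additional people may arrive on a later questionnaire.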
During pretesting, the Whole/Partial question came early in the questionnaire, after the question asking how many people live or stay at the address as shown in the questionnaire flow above in Figure 1.
* * *
Figure 4. Whole/Partial Question tested
* * *
Two problems were found during testing for the Whole/Partial question. The first problem was that the response options were confusing. We noticed that all participants had to read the response options carefully, and sometimes more than once, suggesting that they were not easily understood. One English-speaking participant who lived alone was confused by the response options because none of them addressed their situation specifically and they did not know what "Only myself but not for others" meant.
The second problem with the question was its placement in the questionnaire, which was problematic for both English and Spanish speakers.
* One English-speaking participant entered "3" for how many people lived at their address in the "Pop Count" question, but then answered "Only myself but not for others" for the Whole/Partial question. The participant then navigated to the next screen, where the questionnaire asked for the names of the other two people. They then navigated backwards to change the number on the Pop Count question to "1".
* Another English-speaking participant also verbalized that they were surprised that it asked for all household member names after they had said they were only going to be answering for themselves.
It appears that these two participants interpreted the Whole/Partial question as asking whether they wanted to report for everyone or only some of the people, and they then decided to report only for themselves. Still another English-speaking participant answered "Everyone at this address" but then did not know how to answer the race question, thinking that they were answering it once for everyone. The intent of the question was not clear to these participants, and the question had unintended negative effects on their responses to other questions.
Spanish-speaking participants also had difficulties with this question. One participant found the question "unnecessary" since the census is supposed to count everyone. Another Spanish-speaker who lived alone reported being confused about whether to select "Everyone at this address" or "Only myself but not for others" since they both were correct. And a third Spanish-speaker reported "Some of the people at this address including myself", although additional probing revealed that this participant had in fact listed everyone.
Based on the results of this pretesting and our recommendations, the Whole/Partial question and the response options were redesigned for the field portion of the 2023 Census Test. The new question (Figure 5) was more similar to the one used in the 2020 Census User Experience Survey (see Nichols, Olmsted-Hawala, Feuer, 2021). The response options were changed to "I" statements, the reference date was removed, and a response option was included specifically for respondents who live alone. The question was also moved to the end of the questionnaire, after the rostering and demographic questions so that it would not influence how many people were listed or how the demographic questions were answered.
* * *
Figure 5. Redesigned Whole/Partial Question used in the 2023 Census Test
* * *
4.1.3 Vacant Path
The third goal of the testing was to determine if there were any cognitive or usability problems with existing vacancy questions. These questions (Figures 7 and 8) would appear if the respondent reports that no one lives at the reported address and the unit is vacant. We tested these questions using a vignette (see Figure 6) since it was unlikely anyone would enter an address of a vacant residence naturally during our testing, especially since we did not recruit specifically for this. The vignette was difficult for some participants to understand, but it did enable us to receive some useful feedback on the question and response choices.
* * *
Figure 6. Vignette used to test the vacancy path
* * *
Figure 7. Initial vacancy question used in the usability test
* * *
Figure 8. Follow up vacancy question if "Vacant Residence" is selected in Figure 7
* * *
Most participants had no issues with the initial vacancy question. Participants chose the correct response during the vignette and did not mention any trouble doing so. During probing, participants mentioned that the response choices of "Address does not exist" and "Duplicate address" were confusing. One Spanish-speaking participant chose "Uninhabitable" when answering the first vacancy question in Figure 7, but this appeared to stem from not understanding the vignette itself rather than from issues with the vacancy question. No one had any comments on the response choices for the reason for the vacancy in Figure 8.
For the 2023 Census Test, the initial vacancy question was simplified to the version shown in Figure 9 by removing the text "What will be the status of [ADDRESS] on [REFDATE]?". None of the response choices changed.
* * *
Figure 9. Initial vacancy question used in the 2023 Census Test
* * *
4.1.4 Other usability issues and observations
4.1.4.1 Building the roster of individuals to be counted at the address
Several questions in the online census instrument are used to create a list of people living or staying at an address. Respondents are asked for the total number of people living at the address, their own name, and the names of the other individuals, followed by a final question intended to make sure they did not inadvertently leave anyone off.
During probing, some English-speaking participants wondered if they should answer about the people staying in the household for that specific day or where people spend most of their time. In general, they asked questions about the intent of the roster questions and wondered how long someone needs to stay with them before they should be included. The knowledge check also indicated that some participants did not know how to define who was staying in the household for the census. In the knowledge check, five of the seven English-speaking participants and one of the eight Spanish-speaking participants said that the Census Bureau wants people to be counted where they live on a particular day while the other two English-speakers and seven Spanish-speakers said that the Census Bureau wants people to be counted where they live most of the time. (One English-speaking participant did not complete the knowledge check, and another did not answer this question.)
Another issue was that some participants did not feel comfortable entering the answers for other people in their household, particularly among those who lived with people who were not related to them. For example, one English-speaking participant struggled with the names of their roommates. All the participant's roommates went by a name that was not their legal name, but the participant understood the government form was asking for legal given names. Additionally, two Spanish-speaking participants who rented rooms in larger houses did not know the last names of some household members.
We also found a usability issue with a button label. On one of the roster screens (see Figure 10), a button was labeled "Add Person." If the participant clicked the button, it would add another row to the list. One English-speaking participant did not know if they needed to click on the button to add the names to the list. Although they were initially confused, they were able to figure this out and proceed with the questionnaire. We recommended changing the label to "Add another person" and that recommendation was implemented for the 2023 Census Test.
* * *
Figure 10. Roster screen with Add Person button and Jane Doe as the name already listed
* * *
Another issue with the Roster screen was how it displayed the respondent's name on the screen. The respondent's name is collected on a prior screen and is shown on the roster screen to indicate that they have already been listed. Figure 10 shows an example where Jane Doe is the respondent. The Roster screen says "The names listed so far are: Jane Doe", but "Jane Doe" is not shown in the "First Name" and "Last Name" fields.
Not all participants saw that their name was already listed, or they saw their name but didn't know whether or not to list it again. One English-speaking participant reentered her name and didn't realize it was listed twice until moving forward to the next screen. Another participant spontaneously mentioned that it was not clear whether to list their name again.
Based on our findings, we recommended that the respondent's name be listed in the response fields in the first row (see the mockup in Figure 11). This recommendation was accepted for the 2023 Census Test.
* * *
Figure 11. Mockup of recommended changes to Roster screen
* * *
4.1.4.2 Owned or Rented
Two of the three Spanish-speaking participants who were renting rooms in houses mistakenly answered that the home was rented (see Figure 12), even though it was owned by another household member who was also listed on the roster.
* * *
Figure 12. Owner/renter question in Spanish
* * *
Some participants mentioned that they did not like the question marks after the response options in this question. One participant said that although she understood why the question marks were there, "The question mark bothers me." No changes were made to this question, although we recommend researching the benefit of the question marks.
4.1.4.3 Age and Date of Birth questions
In the online questionnaire, age automatically calculates as of Census Day and appears in the age field when a valid date of birth is entered. Then, the respondent is asked to confirm the age and can overwrite it if needed. Confirming the age is complicated for participants whose birthdays fall around Census Day, with one English-speaking participant saying, "now you are going to make me do math." The confirmation of age as of Census Day (and not the present day) may mean that the respondent is confirming an age that they were (if they are answering the census after Census Day and their birthday falls between Census Day and the current date) or an age that they will be (if they are answering the census before Census Day and their birthday falls between the current date and Census Day). An experiment in 2006 investigated this issue and found evidence for this confusion (see Nichols, Childs, Rodriguez, 2008). Our recommended solution is to confirm the respondent's current age rather than their age on a specific day. No changes were made to the questionnaire for the 2023 Census Test.
* * *
Figure 13. Date of birth and age
* * *
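The Census Day age calculation described in this subsection can be sketched as follows. The function, dates, and Census Day value are illustrative assumptions, not the instrument's actual code; the point is that age as of a reference date differs from current age whenever the birthday falls between the two dates.

```python
from datetime import date

def age_as_of(dob: date, ref: date) -> int:
    """Completed years of age as of a reference date."""
    years = ref.year - dob.year
    # Subtract one if the birthday has not yet occurred in the reference year.
    if (ref.month, ref.day) < (dob.month, dob.day):
        years -= 1
    return years

census_day = date(2023, 4, 1)  # illustrative Census Day
dob = date(1990, 4, 15)

# A birthday shortly after Census Day: the age to confirm as of Census Day
# is one year less than the respondent's age once the birthday passes.
print(age_as_of(dob, census_day))        # -> 32
print(age_as_of(dob, date(2023, 5, 1)))  # -> 33
```

A respondent answering on May 1 would thus be asked to confirm "32" while already being 33, which is the mismatch that prompted the "make me do math" reaction.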
4.1.4.4 Hispanic Origin and Race
The Hispanic origin and race questions were confusing for both English- and Spanish-speaking participants. The 2020 Census version of the question was used in this pretesting (see Figure 14 below), and the same issues identified during this testing have been observed before (see Olmsted-Hawala & Nichols, 2020). For example, having the Hispanic origin question come first and on its own screen, separate from the race question, has been shown to be confusing to some respondents. One English-speaking participant reported being annoyed with the Hispanic origin question because she interpreted it as the race question and could not find her response option; she said, "whatever happened to good old single White female." This participant also commented on the numerous Asian race categories on the race screen. Most English-speaking participants left the detailed race write-in fields blank, with some participants confused about the intent of the write-in request - whether it asked for the country they were from or their culture. Multiple Spanish-speaking participants took a long time reading the race question because they were looking for a Hispanic/Latino category, which was not there. Some of them mentioned after the interview that the race question was upsetting because it was so difficult. These questions were being researched by OMB (https://spd15revision.gov/) at the time, and no changes were made to the 2023 Census Test based on our testing.
* * *
Figure 14. Hispanic origin question
* * *
Figure 15. Race question
* * *
4.1.4.5 Expectations for the end of the Questionnaire
Upon completion of the online questionnaire, participants saw a screen with a confirmation number. Participants reported that they liked having the confirmation number on the last screen showing that they had completed the questionnaire. Participants said they assumed they would also receive an email confirmation about completing their census. We recommended testing an email confirmation, but it was not implemented.
4.2 Satisfaction findings
At the end of the test session, we asked participants a series of satisfaction questions about their experience with the online questionnaire. Each topic was asked using a 7-point Likert scale. Table 2 provides averaged responses by language and combined for each of the nine questions.
* * *
Table 2: Average responses by language to satisfaction questions
* * *
4.3 Knowledge check
During the final part of the usability session, and when time allowed, participants answered a series of "knowledge check" questions so that researchers could learn more about participants' knowledge of topics related to the census (see Appendix A). Of the 17 participants across both languages, 15 completed the knowledge check questions (8 English, 7 Spanish), although some did not answer all the questions in this section. This information can inform what barriers or motivators participants consider when they are asked to complete the census.
Some highlights from this section of the debriefing include:
* While all 15 participants who responded to this question (8 English, 7 Spanish) correctly understood that the law requires answering the census, they were not sure whether they must answer each question in the census.
* Another question in the knowledge check asked whether you can leave the answer to a question blank and move on to the next question on the census if you do not know the answer. Of the 16 participants who responded to this (8 English, 8 Spanish), 10 said that they did not know, while five said that you could leave a question blank and one said that you could not.
* When asked where people should be counted - whether it is where they live most of the time or where they live on a specific day - there was a marked difference between English- and Spanish-speaking participants. Of the seven English speakers who responded to this question, five believed they should count people where they live on a particular day, while only two answered that they should count people where they live most of the time. Conversely, of the eight Spanish speakers who responded, only one said that they should count people where they live on a particular day, while seven said they should count people where they live most of the time. Although it is not immediately clear why this would differ by language, it is evidently not clear to participants which of these two options determines where people should be counted.
5. SUMMARY RECOMMENDATIONS
Three changes to the online census questionnaire design used in 2020 were pretested in the fall of 2022 in preparation for the field portion of the 2023 Census Test. Those included:
* Address field changes, where street number and street name were combined into one field called street address.
* The addition of a path for vacant addresses.
* The addition of a question asking whether the respondent had included everyone on the roster.
Some participants struggled with entering their address accurately in both languages, but not because of the combined street number and street name field. We recommend keeping the design of entering the house number and street name in one field and indicating that filling in the Apartment field is not required. For the Spanish version, we recommend not abbreviating the term "apartamento" (apartment). The address instructions also proved confusing for college students, so clarifying instructional language on where college students should count themselves is advised.
We did not identify any critical usability or cognitive problems with the new questions and question order for identifying housing units that are vacant. However, we were only able to test this path using a vignette.
We identified high priority usability and cognitive issues with the Whole/Partial question asking whether the respondent listed everyone or only some of the people. The original placement of that question early in the online questionnaire implied to participants that they did NOT have to report for everyone, and even caused the critical issue of some participants going back and making edits to eliminate people they had originally counted. Because of this misunderstanding of the question, the question will be placed at the end of the questionnaire for the field test so as not to influence how many people are listed. Additionally, the response choices were confusing, especially for single-person households. Response options were modified to use "I" statements as well as adding a specific response option for those who live alone.
6. REFERENCES
Feuer, S., Berger, M., Olmsted, E., Rivas, A. (2023a). Cognitive Testing of the 2023 Census Test Mailing Materials in English and Spanish. Research and Methodology Directorate, Center for Behavioral Science Methods Research Report Series (Survey Methodology #2023-06). U.S. Census Bureau. Available online at https://www.census.gov/library/working-papers/2023/adrm/rsm2023-06.html
Feuer, S., Olmsted-Hawala, E., Nichols, E. (2023b). Conducting cognitive interviews and usability testing remotely versus in-person: The interplay between qualitative method and interview mode. In 78th Annual AAPOR Conference. AAPOR. https://aapor.confex.com/aapor/2023/meetingapp.cgi/Paper/1474
Nichols, E., Childs, J. H., Rodriguez, R. (2008). 2006 Questionnaire Design and Experimental Research Survey: Demographic Questions Analysis. Research Report Series #2008-1, U.S. Census Bureau. Available online at https://www.census.gov/library/working-papers/2008/adrm/rsm2008-01.html
Nichols, E., Olmsted-Hawala, E., Feuer, S. (2021). 2020 Census User Experience Survey Report. Research and Methodology Directorate, Center for Behavioral Science Methods Research Report Series (Survey Methodology #2021-03). U.S. Census Bureau. Available online at https://www.census.gov/library/working-papers/2021/adrm/rsm2021-03.html
Olmsted-Hawala, E. L., Nichols, E. M. (2020). Usability Testing Results Evaluating the Decennial Census Race and Hispanic Origin Questions Throughout the Decade: 2012-2020. Research and Methodology Directorate, Center for Behavioral Science Methods Research Report Series (Survey Methodology #2020-02). U.S. Census Bureau. Available online at https://www.census.gov/library/working-papers/2020/adrm/rsm2020-02.html
United States Census Bureau. (2025). 2030 Census Research and Testing. Department of Commerce, United States Census Bureau. https://www.census.gov/programs-surveys/decennial-census/decade/2030/planning-management/plan/research-and-testing.html
* * *
The paper is posted at: https://www2.census.gov/library/working-papers/2025/adrm/cbsm/rsm2025-10.pdf
BLS: Productivity Up 6.9 Percent in Long-distance General Freight Trucking in 2024
WASHINGTON, July 8 (TNSLrpt) -- The U.S. Department of Labor Bureau of Labor Statistics issued the following document on July 7, 2025, from Economics Daily:
* * *
Productivity up 6.9 percent in long-distance general freight trucking in 2024
Labor productivity rose in 20 of 31 selected service-providing industries in 2024. Output rose in 21 industries while hours worked increased in 13 industries. Among the 10 largest industries (by number of workers employed in 2024), productivity growth was highest in long-distance general freight trucking, with an increase of 6.9 percent. Productivity increased by 4.0 percent in engineering services and 3.5 percent in full-service restaurants.
* * *
Chart: Percent change in productivity, output, and hours worked in selected service-providing industries, 2024
* * *
In long-distance general freight trucking, the 6.9-percent increase in productivity resulted from a 2.1-percent increase in output combined with a 4.5-percent decrease in hours worked. In engineering services, there were increases in both output and hours worked.
Automotive repair and maintenance posted the steepest productivity decline, a decrease of 6.1 percent, resulting from a 4.3-percent decrease in output accompanied by a 1.9-percent increase in hours worked. Productivity decreased 5.7 percent in couriers and messengers, with output decreasing 4.9 percent and hours worked increasing 0.8 percent.
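Because labor productivity is defined as output per hour worked, the percent changes above combine multiplicatively rather than by simple subtraction. A quick sketch (not a BLS tool) shows how the published output and hours figures imply the productivity figures:

```python
def productivity_change(output_pct: float, hours_pct: float) -> float:
    """Percent change in labor productivity (output per hour) implied by
    percent changes in output and in hours worked."""
    return ((1 + output_pct / 100) / (1 + hours_pct / 100) - 1) * 100

# Long-distance general freight trucking: +2.1% output, -4.5% hours
print(round(productivity_change(2.1, -4.5), 1))   # -> 6.9

# Automotive repair and maintenance: -4.3% output, +1.9% hours
print(round(productivity_change(-4.3, 1.9), 1))   # -> -6.1
```

Note that a naive subtraction (2.1 + 4.5 = 6.6) would understate the trucking figure; the ratio form reproduces the published 6.9 percent.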
These data are from the Productivity program. To learn more, see "Productivity and Costs by Industry: Selected Service-Providing Industries -- 2024." Also see more charts on productivity in service-providing industries. Data are preliminary and may be revised. Labor productivity describes the relationship between real output and the labor hours involved in its production. These measures show the changes from period to period in the amount of goods and services produced per hour worked.
* * *
SUGGESTED CITATION
Bureau of Labor Statistics, U.S. Department of Labor, The Economics Daily, Productivity up 6.9 percent in long-distance general freight trucking in 2024 at https://www.bls.gov/opub/ted/2025/productivity-up-6-9-percent-in-long-distance-general-freight-trucking-in-2024.htm (visited July 08, 2025).
* * *
View original text plus charts and tables here: https://www.bls.gov/opub/ted/2025/productivity-up-6-9-percent-in-long-distance-general-freight-trucking-in-2024.htm