See updates across the data model, metadata structure, and API of our service. Breaking changes that require updates to data consumer applications are announced prior to their implementation.
Effective date:
Components affected: Data model
Announcement:
In the `us_sam_exclusions` dataset, based on the SAM.gov exclusion and debarment list, the designated entity's UEI is currently stored in the `registrationNumber` field. Going forward, a new property called `uniqueEntityId` is available and will contain these identifiers. Starting Feb 1, 2025, we will remove the UEI mapping to `registrationNumber`.
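
A minimal sketch of how a consumer might bridge the transition, assuming entities are handled as plain FollowTheMoney JSON dictionaries (the `get_uei` helper is hypothetical, not part of any published tooling):

```python
def get_uei(entity: dict) -> list[str]:
    """Read the UEI, preferring the new property over the legacy mapping."""
    props = entity.get("properties", {})
    # Prefer the new dedicated property once it is populated:
    ueis = props.get("uniqueEntityId", [])
    if ueis:
        return ueis
    # Fall back to the legacy mapping until it is removed on Feb 1, 2025:
    return props.get("registrationNumber", [])
```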
Effective date:
Components affected: Data model, Export formats
Announcement:
We're phasing out the use of the `target` flag throughout the system, and switching the export formats that are based on `target` to use a defined list of `topics` as their source of truth.
A binary flag (`target`) is an insufficient way to describe which entities are associated with risk. For the past few months, we've been recommending the use of topics to decide whether a match is relevant (e.g. as a PEP or a sanctioned entity). However, some export formats - such as `targets.nested.json` and `targets.simple.csv` - are still using the `target` flag to decide which entities to include.
On January 15, we will switch these two export formats (`targets.nested.json` and `targets.simple.csv`) to include any entities tagged with one of the topics listed below. This is guaranteed to include all current targets, but will bring in additional entities that have topics assigned but are not marked as targets. In short: the new exports will be more correct, and a bit larger. A sketch of how a consumer might apply the new definition follows the topic list below.
This will result in the `targets.nested.json` export of the `default` dataset becoming equivalent to the `topics.nested.json` export of the same collection. That export can be used for testing until the change becomes effective on January 15, 2025. We will eventually remove the `topics.nested.json` export format on February 15, 2025, and only generate the file named `targets.nested.json` going forward.
Topics included in the new target definition:
- `corp.disqual`
- `crime.boss`
- `crime.fin`
- `crime.fraud`
- `crime.terror`
- `crime.theft`
- `crime.traffick`
- `crime.war`
- `crime`
- `debarment`
- `export.control`
- `export.risk`
- `poi`
- `reg.action`
- `reg.warn`
- `role.oligarch`
- `role.pep`
- `role.rca`
- `sanction.counter`
- `sanction.linked`
- `sanction`
- `wanted`
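
Here is a minimal sketch of how a consumer might reproduce the new inclusion rule, assuming entities are handled as plain FollowTheMoney JSON dictionaries (the `TARGET_TOPICS` constant and `is_target` helper are illustrative, not part of the published tooling):

```python
# Illustrative: the topic list above, expressed as a set for membership tests.
TARGET_TOPICS = {
    "corp.disqual", "crime.boss", "crime.fin", "crime.fraud", "crime.terror",
    "crime.theft", "crime.traffick", "crime.war", "crime", "debarment",
    "export.control", "export.risk", "poi", "reg.action", "reg.warn",
    "role.oligarch", "role.pep", "role.rca", "sanction.counter",
    "sanction.linked", "sanction", "wanted",
}

def is_target(entity: dict) -> bool:
    """Return True if the entity would be included in the new exports."""
    topics = entity.get("properties", {}).get("topics", [])
    return any(topic in TARGET_TOPICS for topic in topics)
```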
Effective date: took effect on
Components affected: Data model
Announcement:
The `followthemoney` data model currently stores the citizenship of individuals in the `nationality` property. After being advised that the two concepts are not identical in some jurisdictions, we've now also introduced a `citizenship` property. From the effective date, we will begin moving country affiliations for individuals into the `citizenship` property if that nomenclature is used in the data source (e.g. the UK sanctions list).
Data consumers should check both properties in the future. To get a complete picture of the countries linked to an individual, you may also want to check the `birthCountry` and `country` fields. The latter serves as a catch-all for affiliations that may not involve citizenship or holding a passport - simple residence might be enough.
See: Person schema.
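
Here is a minimal sketch of how a consumer might gather all country affiliations for a person, assuming entities are plain FollowTheMoney JSON dictionaries (the `person_countries` helper is illustrative):

```python
def person_countries(entity: dict) -> set[str]:
    """Collect every country affiliation recorded on a Person entity."""
    props = entity.get("properties", {})
    countries: set[str] = set()
    # Check both nationality and the new citizenship property, plus the
    # broader birthCountry and catch-all country fields:
    for field in ("nationality", "citizenship", "birthCountry", "country"):
        countries.update(props.get(field, []))
    return countries
```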
Effective date: took effect on
Components affected: Data model
Announcement:
The `permId` property for LSEG/Refinitiv company codes has been moved up from the `Company` schema to the `Organization` schema, so that government entities (which use the `PublicBody` schema) that also receive these identifiers can now carry them.
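
One way to confirm the new placement is via the `followthemoney` Python library; this is a quick sketch, relying on `PublicBody` inheriting from `Organization` in the schema hierarchy:

```python
from followthemoney import model

# PublicBody descends from Organization, so moving permId up to
# Organization makes it available on government entities as well:
org = model.get("Organization")
public_body = model.get("PublicBody")
assert org is not None and public_body is not None
assert public_body.is_a(org)
assert "permId" in public_body.properties
```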
Effective date: took effect on
Components affected: Data model
Announcement:
A soft length limit, measured in Unicode codepoints, has been added for all properties. The limits can be seen in the data dictionary. The goal is to make it easier for data consumers to import our data into systems with fixed-length column types.
Property values are not yet guaranteed to stay within these limits, but our tooling now alerts us when values exceed them, so that we can identify sources which don't adhere to sensible limits and eventually enforce hard limits.
Imposing a length limit has also surfaced many instances where the data required further cleaning, which we've implemented as needed.
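
Since values are not yet guaranteed to respect the soft limits, a consumer with fixed-length columns may want to clip defensively. A minimal sketch, where the `MAX_LENGTHS` mapping and its values are illustrative stand-ins for limits read from the data dictionary:

```python
# Illustrative limits, as they might be read from the data dictionary:
MAX_LENGTHS = {"name": 384, "notes": 4096}

def truncate_value(prop: str, value: str) -> str:
    """Clip a property value to its soft limit, counting Unicode codepoints."""
    limit = MAX_LENGTHS.get(prop)
    if limit is not None and len(value) > limit:
        # Python strings are indexed by codepoint, matching how the
        # published limits are defined:
        return value[:limit]
    return value
```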