Upstream will soon auto-delete trashed items after 30 days, but some people
use the trash as an archive folder, so to avoid unexpected data loss, this
implementation requires the user to explicitly enable auto-deletion.
- Fixed bug when web-vault is disabled.
- Updated the SQL server version check to be simpler, thanks to @weiznich ( https://github.com/dani-garcia/bitwarden_rs/pull/1548#discussion_r604767196 )
- Use `VACUUM INTO` to create a SQLite backup instead of using the external sqlite3 application (see the sketch after this list).
- This also makes the sqlite3 packages on the final image unnecessary, so that dependency was removed.
- Updated the backup filename to also include the current time.
- Added a specific bitwarden_rs web-vault version check (to match letter-patched versions)
Will work once https://github.com/dani-garcia/bw_web_builds/pull/33 is built (but it still works without it as well).
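As a rough illustration of the `VACUUM INTO` approach, here is a minimal sketch assuming a Diesel 1.x-style SQLite connection; the function name and error handling are illustrative, not the project's actual backup routine:

```rust
use diesel::prelude::*;
use diesel::sql_query;

// Hypothetical helper: create a SQLite backup with `VACUUM INTO` instead of
// shelling out to the external sqlite3 binary.
pub fn backup_sqlite_database(conn: &SqliteConnection, backup_path: &str) -> QueryResult<usize> {
    // `VACUUM INTO` writes a consistent copy of the live database to the given
    // file, so no sqlite3 package is needed in the final image.
    sql_query(format!("VACUUM INTO '{}'", backup_path.replace('\'', "''"))).execute(conn)
}
```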
Some small changes in general:
- Moved the SQL Version check struct into the function.
- Updated hadolint to 2.0.0
- Fixed hadolint 2.0.0 warnings
- Updated github workflows
- Added .editorconfig for some general shared editor settings.
Updated several json response models.
Also fixed a few small bugs.
ciphers.rs:
- post_ciphers_create:
* Prevent creating a cipher in an organization without a collection.
- update_cipher_from_data:
* ~~Fixed the removal of user_uuid, which caused user-owned shared ciphers to no longer be editable when set to read-only.~~
* Clean up the json_data by removing the `Response` key/values from several objects.
- delete_all:
* Do not delete all Collections during the Purge of an Organization (same as upstream).
cipher.rs:
- Cipher::to_json:
* Updated json response to match upstream.
* Return an empty json object if there is no type_data, instead of returning values which should not be set for that type_data.
organizations.rs:
* Added two new endpoints to prevent JavaScript errors regarding tax.
organization.rs:
- Organization::to_json:
* Updated response model to match upstream
- UserOrganization::to_json:
* Updated response model to match upstream
collection.rs:
- Collection::{to_json, to_json_details}:
* Updated the json response model, and added a detailed version used during the sync
- hide_passwords_for_user:
* Added this function to return whether the passwords should be hidden for the user in the specific collection (used by `to_json_details`); see the sketch below.
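For illustration, a minimal sketch of what such a lookup could look like; the `users_collections` table and column names here are assumptions based on the description, and the default when no access row exists may differ from the real code:

```rust
use diesel::prelude::*;

// Assumed schema fragment, for illustration only.
table! {
    users_collections (user_uuid, collection_uuid) {
        user_uuid -> Text,
        collection_uuid -> Text,
        read_only -> Bool,
        hide_passwords -> Bool,
    }
}

// Returns whether passwords should be hidden for this user in the given
// collection; here it defaults to hiding them when no access row exists.
pub fn hide_passwords_for_user(
    conn: &SqliteConnection,
    user_uuid: &str,
    collection_uuid: &str,
) -> bool {
    users_collections::table
        .filter(users_collections::user_uuid.eq(user_uuid))
        .filter(users_collections::collection_uuid.eq(collection_uuid))
        .select(users_collections::hide_passwords)
        .first::<bool>(conn)
        .unwrap_or(true)
}
```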
Update 1: Some small changes after comments from @jjlin.
Update 2: Fixed vault purge by user to make sure the cipher is not part of an organization.
Resolves #971
Closes #990, Closes #991
When ticking the 'Also rotate my account's encryption key' box, the
key-rotated ciphers are posted after the password change.
During the password change the security stamp was reset, which made
the posted keys return an invalid auth error. This reset is needed to prevent other clients from still being able to read/write.
This fixes that by adding a new database column which stores a stamp exception, consisting of the allowed route and the current security stamp before it gets reset.
When the security stamp check fails, it checks whether there is a stamp exception and tries to match the route and security stamp.
Currently only one exception is allowed, but if needed this could be expanded by using a Vec<UserStampException> and changing the functions accordingly.
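A conceptual sketch of the stamp-exception idea described above; apart from `UserStampException`, which the description mentions, the field and function names are assumptions, and the real code persists the exception in the new database column:

```rust
// Hypothetical shape of the stored exception; in the real code this would be
// serialized into the new database column.
pub struct UserStampException {
    pub route: String,          // route still allowed to use the old stamp
    pub security_stamp: String, // stamp value from before the reset
}

// Conceptual check: if the normal stamp comparison fails, fall back to the
// stored exception, and only accept it for the matching route.
fn stamp_is_valid(
    stored_stamp: &str,
    claimed_stamp: &str,
    exception: Option<&UserStampException>,
    current_route: &str,
) -> bool {
    if stored_stamp == claimed_stamp {
        return true;
    }
    match exception {
        Some(exc) => exc.route == current_route && exc.security_stamp == claimed_stamp,
        None => false,
    }
}
```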
fixes #1240
When using MariaDB v10.5+, foreign-key errors were popping up because of
some changes in that version. To mitigate this on MariaDB and other
MySQL forks, those errors are now caught, and instead of a replace_into
an update will happen. I have tested this as thoroughly as possible with
MariaDB 10.5, 10.4, 10.3 and the default MySQL on Ubuntu Focal, and
tested it again using SQLite; all seems to be ok on all tables.
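A sketch of the caught-error fallback, assuming Diesel 1.x; the table, columns and struct here are placeholders, not the project's actual model:

```rust
use diesel::prelude::*;
use diesel::result::{DatabaseErrorKind, Error as DieselError};

// Placeholder schema fragment for illustration only.
table! {
    users_organizations (uuid) {
        uuid -> Text,
        user_uuid -> Text,
        org_uuid -> Text,
        akey -> Text,
    }
}

#[derive(Insertable)]
#[table_name = "users_organizations"]
pub struct UserOrganizationDb {
    pub uuid: String,
    pub user_uuid: String,
    pub org_uuid: String,
    pub akey: String,
}

// On MariaDB 10.5+ the REPLACE behind replace_into can trip foreign-key
// checks, so a ForeignKeyViolation is caught and turned into a plain UPDATE.
pub fn save(conn: &MysqlConnection, model: &UserOrganizationDb) -> QueryResult<usize> {
    match diesel::replace_into(users_organizations::table)
        .values(model)
        .execute(conn)
    {
        Err(DieselError::DatabaseError(DatabaseErrorKind::ForeignKeyViolation, _)) => {
            diesel::update(users_organizations::table.filter(users_organizations::uuid.eq(&model.uuid)))
                .set((
                    users_organizations::user_uuid.eq(&model.user_uuid),
                    users_organizations::org_uuid.eq(&model.org_uuid),
                    users_organizations::akey.eq(&model.akey),
                ))
                .execute(conn)
        }
        other => other,
    }
}
```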
resolves #1081, resolves #1065, resolves #1050
If org owners/admins set their org access to only include selected
collections, then ciphers from non-selected collections shouldn't
appear in "My Vault". This matches the upstream behavior.
Diesel requires the following changes:
- Separate connection and pool types per connection, the generate_connections! macro generates an enum with a variant per db type
- Separate migrations and schemas, these were always imported as one type depending on db feature, now they are all imported under different module names
- Separate model objects per connection, the db_object! macro generates one object for each connection with the diesel macros, a generic object, and methods to convert between the connection-specific and the generic ones
- Separate connection queries, the db_run! macro allows writing only one that gets compiled for all databases or multiple ones
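As a rough, simplified illustration of the pattern: the real generate_connections!, db_object! and db_run! macros are more involved and gate each variant behind a cargo feature; the names and schema below are illustrative only.

```rust
use diesel::prelude::*;

// Illustrative schema fragment.
table! {
    users (uuid) {
        uuid -> Text,
        email -> Text,
    }
}

// A connection enum with one variant per backend, roughly what
// generate_connections! would produce (the real one also wraps pooling).
pub enum DbConn {
    Sqlite(SqliteConnection),
    Postgresql(PgConnection),
}

// A db_run!-style macro: the query body is written once and compiled for each
// backend, with `$conn` bound to the concrete connection type in every arm.
macro_rules! db_run {
    ($db:expr, |$conn:ident| $body:block) => {
        match $db {
            DbConn::Sqlite($conn) => $body,
            DbConn::Postgresql($conn) => $body,
        }
    };
}

// Example usage: the same Diesel query compiles against both backends.
fn user_count(conn: &DbConn) -> i64 {
    db_run!(conn, |c| {
        users::table.count().get_result::<i64>(c).unwrap_or(0)
    })
}
```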
Currently, favorites are tracked at the cipher level. For org-owned ciphers,
this means that if one user sets it as a favorite, it automatically becomes a
favorite for all other users that the cipher has been shared with.
Main changes:
- Split up settings and users into two separate pages.
- Added verified shield when the e-mail address has been verified.
- Added the amount of personal items in the database to the users overview.
- Added Organizations and Diagnostics pages.
- Shows if DNS resolving works.
- Shows if there is a possible time drift.
- Shows current versions of server and web-vault.
- Optimized logo-gray.png using optipng
Items which can be added later:
- Amount of cipher items accessible to a user, not only their personal items.
- Amount of users per Org
- Version update check in the diagnostics overview.
- Copy/pasteable runtime config with sensitive data changed or removed, for support questions on the forum or in GitHub issues.
- Option to delete Orgs and all their passwords (when there are no members anymore).
- Etc....
PostgreSQL updates/inserts ignored None/null values.
This is nice for new entries, but not for updates.
Added a derive option to always include these None/null values for Option<>
fields.
This solves issue #965
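If the derive option referred to is Diesel's `treat_none_as_null` changeset option (an assumption), a minimal sketch could look like this; the table and columns are placeholders:

```rust
use diesel::prelude::*;

// Placeholder schema fragment.
table! {
    users (uuid) {
        uuid -> Text,
        totp_secret -> Nullable<Text>,
    }
}

// With treat_none_as_null, a `None` field is written as SQL NULL on update
// instead of being skipped, so clearing a value actually persists.
#[derive(AsChangeset)]
#[changeset_options(treat_none_as_null = "true")]
#[table_name = "users"]
pub struct UserUpdate {
    pub totp_secret: Option<String>,
}

// Example: clearing the TOTP secret now also persists on PostgreSQL.
fn clear_totp(conn: &PgConnection, user_uuid: &str) -> QueryResult<usize> {
    diesel::update(users::table.filter(users::uuid.eq(user_uuid)))
        .set(&UserUpdate { totp_secret: None })
        .execute(conn)
}
```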
I've checked the spots where `Invitation::new()` and `Invitation::take()`
are used and it seems like all spots are already correctly gated. So to
enable invitations via admin API even when invitations are otherwise
disabled, this check can be removed.
Because of differences in how .on_conflict() works compared to .replace_into(), the PostgreSQL backend wasn't correctly enforcing the unique constraint on user_uuid and atype.
This change simply issues a DELETE on rows matching that unique constraint prior to the insert to ensure uniqueness, since PostgreSQL does not support multiple constraints in ON CONFLICT clauses.
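A sketch of that approach for the PostgreSQL backend, assuming Diesel 1.x; the twofactor schema fragment below is trimmed for illustration:

```rust
use diesel::prelude::*;

// Trimmed schema fragment; the real table has more columns.
table! {
    twofactor (uuid) {
        uuid -> Text,
        user_uuid -> Text,
        atype -> Integer,
        data -> Text,
    }
}

#[derive(Insertable)]
#[table_name = "twofactor"]
pub struct TwoFactorDb {
    pub uuid: String,
    pub user_uuid: String,
    pub atype: i32,
    pub data: String,
}

// ON CONFLICT can only target one constraint, so first delete any row hitting
// the (user_uuid, atype) unique constraint, then upsert on the uuid key.
pub fn save_postgres(conn: &PgConnection, model: &TwoFactorDb) -> QueryResult<usize> {
    diesel::delete(
        twofactor::table
            .filter(twofactor::user_uuid.eq(&model.user_uuid))
            .filter(twofactor::atype.eq(model.atype)),
    )
    .execute(conn)?;

    diesel::insert_into(twofactor::table)
        .values(model)
        .on_conflict(twofactor::uuid)
        .do_update()
        .set(twofactor::data.eq(&model.data))
        .execute(conn)
}
```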
- Added a security check for previously used codes
- Allow TOTP codes one step back and forward when there is a time
drift. This means up to 3 codes could be valid in total, but only codes
newer than the previously used code are accepted after that.
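A sketch of that acceptance rule; the `generate_totp` closure stands in for whatever TOTP implementation is actually used, and the function is illustrative, not the real validation code:

```rust
// Accept a submitted code if it matches the current 30-second step or one step
// before/after (clock drift), but only when that step is newer than the step
// of the last accepted code, so previously used codes are rejected.
fn validate_totp(
    submitted: &str,
    current_step: u64,
    last_used_step: u64,
    generate_totp: impl Fn(u64) -> String,
) -> Result<u64, &'static str> {
    for step in current_step.saturating_sub(1)..=current_step + 1 {
        if generate_totp(step) == submitted {
            if step <= last_used_step {
                // Same or older code was already used once; refuse replays.
                return Err("TOTP code has already been used");
            }
            // The caller should persist `step` as the new last-used step.
            return Ok(step);
        }
    }
    Err("Invalid TOTP code")
}
```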
This includes migrations as well as Dockerfiles for amd64.
The biggest change is that replace_into isn't supported by Diesel for the
PostgreSQL backend, instead requiring the use of on_conflict. This
unfortunately requires a branch for save() on all of the models currently
using replace_into.
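A sketch of that per-backend branch, assuming Diesel 1.x and a trimmed placeholder model; the real save() implementations cover more columns and wrap the result in the project's error type:

```rust
use diesel::prelude::*;

// Placeholder schema fragment.
table! {
    devices (uuid) {
        uuid -> Text,
        user_uuid -> Text,
        name -> Text,
    }
}

#[derive(Insertable)]
#[table_name = "devices"]
pub struct DeviceDb {
    pub uuid: String,
    pub user_uuid: String,
    pub name: String,
}

// SQLite/MySQL can keep using replace_into...
pub fn save_sqlite(conn: &SqliteConnection, device: &DeviceDb) -> QueryResult<usize> {
    diesel::replace_into(devices::table).values(device).execute(conn)
}

// ...while PostgreSQL needs an explicit ON CONFLICT upsert instead.
pub fn save_postgres(conn: &PgConnection, device: &DeviceDb) -> QueryResult<usize> {
    diesel::insert_into(devices::table)
        .values(device)
        .on_conflict(devices::uuid)
        .do_update()
        .set((
            devices::user_uuid.eq(&device.user_uuid),
            devices::name.eq(&device.name),
        ))
        .execute(conn)
}
```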