blobs/download: try backup if bad hash on main

At present, the logic only tries the backup URLs when the
download itself fails (e.g. a bad internet connection, or
the server is down).

If the main download succeeds but the file has a bad
checksum, the backup download is not attempted.

Since a file with a mismatched checksum must be assumed
useless, we may as well delete it and try the next URL.
This guards against the possibility of a vendor changing
a file without changing its name (non-versioned files,
for example, may be subject to such changes).
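
For illustration, the verify-delete-retry pattern described
above can be written as a standalone POSIX sh function. This
is a sketch only: fetch_with_fallback and its sha512sum-based
verification are hypothetical names made up for this example,
not code from lbmk (the actual commit inlines the loop in
fetch_update and verifies via vendor_checksum):

	# Hypothetical sketch (not lbmk code): try each URL in turn,
	# and only accept a download once the file's checksum matches
	# the expected sum.
	fetch_with_fallback()
	{
		_sum="$1" _out="$2"
		shift 2
		for _url in "$@"; do
			rm -f "${_out}" # discard any stale or corrupt file first
			wget -O "${_out}" "${_url}" || continue
			if [ "$(sha512sum "${_out}" | awk '{print $1}')" = "${_sum}" ]; then
				return 0 # downloaded and verified
			fi
		done
		rm -f "${_out}" # all URLs exhausted without a good checksum
		return 1
	}

A call such as fetch_with_fallback "${dlsum}" "${dl_path}" "${dl}" "${dl_bkup}"
would try the main URL and then the backup, deleting the bad
file in between, which is the behaviour this commit introduces.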

Signed-off-by: Leah Rowe <leah@libreboot.org>
Leah Rowe 2023-08-05 21:13:34 +01:00
parent f18b1859db
commit cdd83ab1ce
1 changed file with 14 additions and 6 deletions

@@ -452,12 +452,20 @@ fetch_update()
 	dl_path=${blobdir}/cache/${dlsum}
 	mkdir -p ${blobdir}/cache
-	vendor_checksum ${dlsum} || \
-		wget -U "${agent}" ${dl} -O ${dl_path} \
-		|| wget -U "${agent}" ${dl_bkup} -O ${dl_path}
-	vendor_checksum ${dlsum} || fail \
-		"Cannot guarantee intergity of vendor update for: ${board}"
+	dl_fail="y"
+	vendor_checksum ${dlsum} && dl_fail="n"
+	for x in "${dl}" "${dl_bkup}"; do
+		if [ "${dl_fail}" = "n" ]; then
+			break
+		fi
+		rm -f "${dl_path}"
+		wget -U "${agent}" ${x} -O ${dl_path}
+		vendor_checksum ${dlsum} && dl_fail="n"
+	done
+	if [ "${dl_fail}" = "y" ]; then
+		printf "Could not download blob file\n" 1>&2
+		return 1
+	fi
 }
 
 vendor_checksum()
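
The new loop relies on vendor_checksum returning success
(zero) when the cached file at ${dl_path} matches the
expected sum, and failure otherwise. Its body is not shown
on this page; purely as an assumed sketch (the real lbmk
function may use a different hash program or structure):

	# Assumed sketch of vendor_checksum, not the actual lbmk
	# definition: compare a sha512sum of the cached file against
	# the expected sum passed as $1.
	vendor_checksum()
	{
		[ "$(sha512sum "${dl_path}" | awk '{print $1}')" = "$1" ]
	}

Because the loop re-runs this check after every wget, a file
that downloads cleanly but hashes wrong is treated the same
as a failed download: it is deleted and the next URL is tried.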