Join us at Microsoft Azure Virtual Training Day: Fundamentals for an introduction to cloud computing concepts, models, and services, from public to private to hybrid cloud. You’ll also learn the basics of infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), and prepare to take full advantage of the key Azure pillars of security, privacy, compliance, and trust.
Who should attend:
Developers
Architects
IT professionals
Cloud novices
Technical level: For those who are just beginning to work with cloud-based solutions and services or are new to Azure.
After completing this training, you will be able to:
Prepare for cloud migration by learning how to evaluate your existing on-premises environment.
Optimize your Azure-based workloads for maximum ROI.
Apply best practices for managing your virtual machines, applications, and data with Azure.
See the Microsoft digital event code of conduct. For questions about discounted exam eligibility, scheduling, or expirations, please refer to the FAQ.
The Microsoft Azure Virtual Training Day: Fundamentals event, including the exam offered upon completing the event, is open to the public and offered at no cost. Prior to registering for this training, government employees must check with their employers to ensure their participation is permitted and in accordance with applicable policies and laws.
Azure Fundamentals (AZ-900 exam) is the foundational-level exam in the new Azure certifications path. This exam is intended for those who want to demonstrate their basic knowledge of cloud services with Microsoft Azure. Even if you come from a non-technical background, you can take this certification exam as long as you have a basic understanding of cloud concepts. You can check your preparation level with an AZ-900 practice test; we recommend starting with the free AZ-900 practice test.
Exam AZ-900: Microsoft Azure Fundamentals
The AZ-900 Azure Fundamentals exam is considered the first step on the path to the associate-level and expert-level Azure certifications. Although it is optional rather than mandatory, it serves as the entry point to all other associate- and expert-level Azure certifications, and validating your foundational knowledge will benefit you. You can try out Azure Fundamentals practice tests to prepare for the AZ-900 exam.
Let’s have a look at the details of the Azure AZ-900 exam.
Prerequisites:
There are no particular prerequisites for the Microsoft Azure Fundamentals exam, but you should be familiar with and have a basic understanding of cloud services and the Microsoft Azure platform.
Domains covered in the AZ-900 exam are:
Describe Cloud Concepts: 15-20%
Describe Core Azure Services: 30-35%
Describe Security, Privacy, Compliance, and Trust: 25-30%
Describe Azure Pricing, Service Level Agreements, and Lifecycles: 25-30%
Exam Languages: English, Japanese, Simplified Chinese, Korean, German, Spanish, and French
Certification Cost: USD 99
Azure Practice Exams:
Answer Sheet
Q1) You are working in a company that plans to migrate its application to Azure. This application will be responsible for hosting the banking records of its users. You have been asked to suggest a security information and event management (SIEM) and security orchestration automated response (SOAR) solution. Which of the following Azure services will meet the requirement?
Azure CycleCloud
Azure Sphere
Azure Sentinel
Azure Security Center
Q2)
Let us suppose ABC Ltd. plans to migrate all its data and resources to Azure. The company’s migration plan states that only platform as a service (PaaS) solutions must be used in Azure. Now you are required to deploy an Azure environment which supports the planned migration.
Solution: In this case, you create an Azure App Service and Azure virtual machines that have Microsoft SQL Server installed.
Does the solution meet the desired goal?
Yes, the solution meets the desired goal.
No, the solution does not meet the desired goal.
Q3) Let us suppose a company wants to try out some services that are being offered by Azure in Public Preview.
In this case, should the company deploy resources which are part of Public Preview in their production environment?
Yes
No
Q4)
For the statement given below, select Correct if the statement is true, else select Incorrect if it is false.
“An Azure free account has a spending limit”.
Correct
Incorrect
Q5)
You have plans to deploy several Azure virtual machines. You are required to ensure that the services running on the virtual machines are available, even if a single data center fails.
Solution: You suggest deploying the virtual machines to two or more scale sets.
Does the suggested solution meet the desired goal?
Yes, the solution meets the desired goal
No, the solution does not meet the desired goal
Q6) An organization plans to migrate its application named QuickApp1 to Azure. As per the observed pattern, QuickApp1 has low usage during the second and fourth weeks of the month and high usage during the first and third weeks. Which of the following benefits of Azure Cloud Services will support cost management for this kind of usage pattern?
Load balancing
Elasticity
High availability
Fault tolerance
Q7) Which of the following Azure services would you suggest when you are planning to create an application with an event-based architecture that can ingest events from Blob storage and create custom topics?
Azure Logic Apps
Azure Functions
Azure Machine Learning Studio
Azure Event Grid
Q8)
For the statement given below, select Correct if the statement is true, else select Incorrect if it is false.
“Azure resources can only access other resources in the same resource group.”
Correct
Incorrect
Q9) Your organization is planning to build a customized solution for uploading weather data to Azure using several million sensors. Which of the given services should the company use to connect, monitor, and control the sensors without managing the infrastructure?
Azure Files
Azure Virtual Machine
Azure IoT Hub
Azure App Service
Q10) ______________ offers a real-time analytics and complex event-processing engine.
T-SQL: assign a database role in bulk for a particular database
--SELECT * FROM dbo.dbRolesUsersMap ('db_role_name')
--SELECT * FROM dbo.dbRolesUsersMap (DEFAULT)
use db_name
-- Generates (and optionally executes) ALTER ROLE ... ADD MEMBER statements for every
-- database user that is not yet a member of the target role.
Declare @provideuserrolerole varchar(8000)
Declare @user varchar(128)
Declare @assignrole varchar(100)
Declare @i int
Declare @j int
set @assignrole='db_role_name'
-- Collect the distinct database users that are not already members of the target role,
-- skipping Windows accounts of the form DOMAIN\name.
Select Distinct Rno=ROW_NUMBER()over(Order by members.name), members.name as 'members_name'
into ##providedbprivilgeobjects
FROM sys.database_role_members rolemem
INNER JOIN sys.database_principals roles
ON rolemem.role_principal_id = roles.principal_id
INNER JOIN sys.database_principals members
ON rolemem.member_principal_id = members.principal_id
where members.name not in (SELECT Database_User_Name FROM dbo.dbRolesUsersMap ('db_role_name'))
and members.name not like '%\%'
ORDER BY members.name
Select @i=1,@j=COUNT(*) from ##providedbprivilgeobjects
While @i <= @j
Begin
Select @user=members_name from ##providedbprivilgeobjects where Rno=@i
set @provideuserrolerole='Use db_name'+space(2)+'ALTER ROLE '+space(1)+'['+@assignrole+']'+space(2)+'ADD MEMBER '+space(2)+'['+@user+']'
--select @provideuserrolerole
Print @provideuserrolerole          -- review the generated statements first
--Execute(@provideuserrolerole)     -- uncomment to actually apply them
SET @i=@i+1
End
drop table ##providedbprivilgeobjects
Note: the script above relies on the helper function dbo.dbRolesUsersMap, which maps database roles to their members. Create it as follows:
use db_name
CREATE FUNCTION dbo.dbRolesUsersMap (@dbRole SYSNAME = '%')
RETURNS TABLE
AS
RETURN (
SELECT
User_Type =
CASE mmbrp.[type]
WHEN 'G' THEN 'Windows Group'
WHEN 'S' THEN 'SQL User'
WHEN 'U' THEN 'Windows User'
END,
Database_User_Name = mmbrp.[name],
Login_Name = ul.[name],
DB_Role = rolp.[name]
FROM sys.database_role_members mmbr, -- The Role OR members associations table
sys.database_principals rolp, -- The DB Roles names table
sys.database_principals mmbrp, -- The Role members table (database users)
sys.server_principals ul -- The Login accounts table
WHERE Upper (mmbrp.[type]) IN ( 'S', 'U', 'G' )
-- No need for these system account types
AND Upper (mmbrp.[name]) NOT IN ('SYS','INFORMATION_SCHEMA')
AND rolp.[principal_id] = mmbr.[role_principal_id]
AND mmbrp.[principal_id] = mmbr.[member_principal_id]
AND ul.[sid] = mmbrp.[sid]
AND rolp.[name] LIKE '%' + @dbRole + '%'
)
GO
--SELECT * FROM dbo.dbRolesUsersMap ('db_role_name')
E: The repository 'http://archive.canonical.com/ubuntu bionic Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://archive.ubuntu.com/ubuntu bionic Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://archive.ubuntu.com/ubuntu bionic-updates Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://archive.ubuntu.com/ubuntu bionic-backports Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://archive.ubuntu.com/ubuntu bionic-security Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Solutions:
·For some reason, APT cannot find the Release file in the repositories listed.
To fix this, I would suggest the following:
·Check whether your Internet connection is behind a firewall or a proxy, or limited in any other way, and configure your system and connection accordingly, or change to a different connection if available.
·Check whether there is an APT proxy configuration file by running the following command:
ls /etc/apt/apt.conf.d/*proxy*
If the command returns any results, move those files out of the /etc/apt/apt.conf.d/ directory or delete them.
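For example, a move along these lines would work (the destination is only a suggestion; any safe location outside /etc/apt/apt.conf.d/ will do):
sudo mv /etc/apt/apt.conf.d/*proxy* ~/   # destination is just an example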
·Revert your repositories list to the original Ubuntu Bionic Beaver list by running the following command in the terminal:
sudo nano /etc/apt/sources.list
A file editor will be opened. Delete all the lines in it and then copy and paste the following into the file editor:
deb http://archive.ubuntu.com/ubuntu bionic main universe multiverse restricted
deb http://security.ubuntu.com/ubuntu/ bionic-security main multiverse universe restricted
deb http://archive.ubuntu.com/ubuntu bionic-updates main multiverse universe restricted
Then, save and close the file by pressing Ctrl + X, then press Y, then press Enter.
To verify you saved the file correctly, please run the following command in the terminal:
cat /etc/apt/sources.list
The output should be exactly:
deb http://archive.ubuntu.com/ubuntu bionic main universe multiverse restricted
deb http://security.ubuntu.com/ubuntu/ bionic-security main multiverse universe restricted
deb http://archive.ubuntu.com/ubuntu bionic-updates main multiverse universe restricted
Ubuntu repositories have a defined format; an entry should look something like deb http://archive.ubuntu.com/ubuntu bionic main.
Explanation:
deb: These repositories contain binaries or precompiled packages. These repositories are required for most users.
http://archive.ubuntu.com/ubuntu: The URI (Uniform Resource Identifier), in this case a location on the internet.
bionic: The release name of your Ubuntu installation.
main, restricted, etc.: The section names or components. There can be several section names, separated by spaces.
After that, please update your repositories list by running the following command in the terminal:
sudo apt update
You should now be able to install packages and update your system again.
Notice:
If you still get errors, please first back up /etc/apt/sources.list.d/ to your home directory before making further changes.
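A recursive copy of the directory is enough, for example (the backup name here is only a suggestion):
sudo cp -R /etc/apt/sources.list.d ~/sources.list.d.backup   # adjust the destination as you prefer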
If you want to have more control over how your sources are organized, you can manually edit the /etc/apt/sources.list file and add the apt repository line to the file.
For demonstration, we will enable the CouchDB repository and install the software. CouchDB is a free and open-source fault-tolerant NoSQL database maintained by the Apache Software Foundation.
To add the repository, open the sources.list file with your text editor:
$ sudo nano /etc/apt/sources.list
Add the repository line to the end of the file (/etc/apt/sources.list):
deb https://apache.bintray.com/couchdb-deb bionic main
Another option is to create a new repository file under the /etc/apt/sources.list.d/ directory.
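For instance, the CouchDB line shown above could go into its own file; the file name below is arbitrary, as long as it ends in .list:
echo "deb https://apache.bintray.com/couchdb-deb bionic main" | sudo tee /etc/apt/sources.list.d/couchdb.list   # file name is an example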
When manually configuring a repository, you also need to manually import the public repository key to your system. To do that, use either wget or curl:
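For example, with curl (the key URL below is the one CouchDB published for this Bintray repository; check the documentation of whichever repository you are adding):
curl -L https://couchdb.apache.org/repo/bintray-pubkey.asc | sudo apt-key add -   # verify the key URL against the project's docs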
The command above should output OK, which means that the GPG key has been successfully imported and packages from this repository will be considered trusted.
Before installing packages from the newly added repository, you must update the package index:
sudo apt update
Once the package index is updated, you can install packages from the newly added repository:
sudo apt install couchdb
To remove a leftover swap file:
$ rm .MERGE_MSG.swp
To remove a malfunctioning repository file:
$ sudo rm /etc/apt/sources.list.d/file.list
vi editor commands (Command - Purpose):
i - Switch to Insert mode.
Esc - Switch to Command mode.
:w - Save and continue editing.
:wq or ZZ - Save and quit/exit vi.
To edit a file in the nano editor and save:
$ sudo nano /etc/apt/sources.list
Add or modify the file contents. Then, save and close the file by pressing Ctrl + X, then press Y, then press Enter.
To edit a file in the vi editor and save:
$ sudo vi /etc/apt/files.list
Type a colon (:) to move the cursor to the bottom of the screen. This is where final commands can be made. Then type wq.
wq is two individual commands: w for Write (or Save) and q for Quit. This command combination returns you to the command line (i.e. :wq).
Commands:
Installation
InfluxDB can be installed directly from the standard Ubuntu repositories. First, make sure those repositories are in your sources list by typing the following commands (note the -a flag on the second and third commands so the earlier lines are appended to, not overwritten):
echo "deb http://archive.ubuntu.com/ubuntu bionic main universe multiverse restricted" | sudo tee /etc/apt/sources.list.d/influxdb.list
echo "deb http://security.ubuntu.com/ubuntu/ bionic-security main multiverse universe restricted" | sudo tee -a /etc/apt/sources.list.d/influxdb.list
echo "deb http://archive.ubuntu.com/ubuntu bionic-updates main multiverse universe restricted" | sudo tee -a /etc/apt/sources.list.d/influxdb.list
(or)
Revert your repositories list to the original Ubuntu Bionic Beaver list by running the following command in the terminal:
sudo nano /etc/apt/sources.list
A file editor will be opened. Delete all the lines in it and then copy and paste the following into the file editor:
deb http://archive.ubuntu.com/ubuntu bionic main universe multiverse restricted
deb http://security.ubuntu.com/ubuntu/ bionic-security main multiverse universe restricted
deb http://archive.ubuntu.com/ubuntu bionic-updates main multiverse universe restricted
Then, save and close the file by pressing Ctrl + X, then press Y, then press Enter.
To verify you saved the file correctly, please run the following command in the terminal:
cat /etc/apt/sources.list
The output should be exactly:
deb http://archive.ubuntu.com/ubuntu bionic main universe multiverse restricted
deb http://security.ubuntu.com/ubuntu/ bionic-security main multiverse universe restricted
deb http://archive.ubuntu.com/ubuntu bionic-updates main multiverse universe restricted
sudo apt update
Once the list of repositories has been updated, proceed with the installation of InfluxDB using the command:
sudo apt install influxdb
Once installed, check the status of the service via:
sudo systemctl status influxdb
If this command reports that the service is installed but inactive, the tool has been correctly installed, but it is not running yet.
To start it and make sure that the service is always available every time the machine is restarted, type the command:
sudo systemctl enable --now influxdb
In this way, not only will it start on the spot, but it will also be reloaded every time the server is restarted. To start it manually, run the following as an alternative to the command shown above:
sudo systemctl start influxdb
To stop the execution of InfluxDB use the command:
sudo systemctl stop influxdb
Configuring InfluxDB
The InfluxDB configuration file is located by default at /etc/influxdb/influxdb.conf.
Many features are commented out. To enable them, simply open the configuration file and delete the "#" symbols from the relevant lines.
To modify the configuration file use the command:
sudo nano /etc/influxdb/influxdb.conf
For example, to allow access via HTTP requests (enabling the HTTP endpoints), uncomment the "enabled" item in the "http" section.
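After the change, the relevant part of the file should look roughly like this (only the [http] section is shown; other settings are omitted):
[http]
  # Determines whether the HTTP endpoints are enabled.
  enabled = true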
When the necessary changes are made, close and exit using the key combination Ctrl + X, then Y, then Enter.
To apply the changes, the service has to be restarted. This can be carried out using the stop and start commands described above.
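Alternatively, a single restart should work (assuming the same influxdb systemd unit used above):
sudo systemctl restart influxdb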
As with any database, after the installation the first thing to do is to create an administrator account. This can be done using the following command:
curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER admin WITH PASSWORD 'password' WITH ALL PRIVILEGES"
Be sure to replace:
admin: with the user name;
password: with the password for the database login.
Once the account is created, access the InfluxDB shell by using the command:
influx -username 'admin' -password 'password'
N.B. In this case too, the “admin” and “password” parameters have to be replaced with those previously declared.
In addition to running queries directly from the shell, your queries can be submitted to InfluxDB by using tools such as "curl". The syntax to respect is the following:
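A typical request looks something like the following (the SHOW DATABASES query is only an example; the -u credentials are the ones created earlier and are needed only if authentication is enabled):
curl -G "http://localhost:8086/query" -u admin:password --data-urlencode "q=SHOW DATABASES"   # query string is an example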
N.B. If the InfluxDB database is queried from a machine other than the one where the server is installed, enter the IP address of the machine to query instead of ‘localhost’.
Enabling the Firewall
Since InfluxDB can also be queried from the outside, it may be necessary to update the firewall rules to allow incoming connections.
If your firewall is UFW, just type the following command:
sudo ufw allow 8086/tcp
This will allow TCP traffic on port 8086 used by InfluxDB for
querying the database from outside.
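To confirm that the rule has been added, the current rule list can be checked (sketch, assuming UFW is active):
sudo ufw status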