An example of installing and debugging OpenStack on Ubuntu

2015-04-25 19:44:51 | Category: linux'cloud

Note: the command to add a trusted key for an Ubuntu software source is "sudo apt-key add *.key".

[Figure] A simple diagram for understanding OpenStack
 


Network Environment

A single-network-node GRE setup needs at least three NICs. Here, however, all services are installed on one node and there is no separate quantum agent node, so only two NICs are used.

1. Management Network: eth0 172.16.0.254/16, used for MySQL and AMQP
2. External Network: eth1 192.168.8.20/24, bridged as br-ex

NIC Settings

eth1 is used for the quantum external network. No IP address is written for it in the configuration file; later, when OVS is configured, the br-ex interface information will be added to this file.

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 172.16.0.254
        netmask 255.255.0.0

auto eth1
iface eth1 inet manual
# /etc/init.d/networking restart
# ifconfig eth1 192.168.8.20/24 up
# route add default gw 192.168.8.1 dev eth1
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf

Add the Grizzly sources and update the packages

# cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb  http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_GEEK_
# apt-get install ubuntu-cloud-keyring
# apt-get update
# apt-get upgrade

Installing MySQL

# apt-get install python-mysqldb mysql-server

Use sed to edit /etc/mysql/my.cnf: change bind-address from localhost (127.0.0.1) to any address (0.0.0.0) and restart the mysql service, as root.
Also disable MySQL hostname resolution (skip-name-resolve) to prevent resolution errors and slow remote connections.

# sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
# sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
# /etc/init.d/mysql restart

Installing RabbitMQ

Install the messaging queue server, RabbitMQ. You have the option of installing Apache Qpid.

# apt-get install rabbitmq-server

Installing and configuring Keystone

# apt-get install keystone

Delete the keystone.db file created in the /var/lib/keystone directory.

# rm -f /var/lib/keystone/keystone.db
Create a keystone database

To create the database manually, start the mysql command-line client and create the keystone database.

Create a MySQL user for the newly-created keystone database that has full control of the keystone database.

# mysql -uroot -pmysql
mysql> create database keystone;
mysql> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
mysql> flush privileges; quit;
Change keystone.conf

Change /etc/keystone/keystone.conf:

admin_token = www.longgeek.com
debug = True
verbose = True
[sql]
connection = mysql://keystone:keystone@172.16.0.254/keystone                 # This line must go under the [sql] section
[signing]
token_format = UUID

Start keystone services:

 /etc/init.d/keystone restart

Synchronize the keystone tables to the database:

 keystone-manage db_sync
Script to import data

Create users, roles, tenants, services, and endpoints:
Download the script:

# wget http://download.longgeek.com/openstack/grizzly/keystone.sh

Customize the script contents:

ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}     # Tenant admin password
SERVICE_PASSWORD=${SERVICE_PASSWORD:-password}              # Password for nova, glance, cinder, quantum, swift
export SERVICE_TOKEN="www.longgeek.com"    # token
export SERVICE_ENDPOINT="http://172.16.0.254:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}      # The 'service' tenant, which includes nova, glance, cinder, quantum, swift and other services
KEYSTONE_REGION=RegionOne
KEYSTONE_IP="172.16.0.254"
#KEYSTONE_WLAN_IP="172.16.0.254"
SWIFT_IP="172.16.0.254"
#SWIFT_WLAN_IP="172.16.0.254"
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
QUANTUM_IP=$KEYSTONE_IP

Running script:

# sh keystone.sh
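keystone.sh is essentially a series of keystone CLI calls driven by the variables above. A minimal sketch of the kind of commands it runs (this is not the actual script; the ids are captured with the same awk pattern used in the quantum examples later in this document):

# ADMIN_TENANT=$(keystone tenant-create --name admin | awk '/ id / {print $4}')
# ADMIN_USER=$(keystone user-create --name admin --pass $ADMIN_PASSWORD --tenant-id $ADMIN_TENANT | awk '/ id / {print $4}')
# ADMIN_ROLE=$(keystone role-create --name admin | awk '/ id / {print $4}')
# keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
# KEYSTONE_SERVICE=$(keystone service-create --name keystone --type identity --description 'Keystone Identity Service' | awk '/ id / {print $4}')
# keystone endpoint-create --region $KEYSTONE_REGION --service-id $KEYSTONE_SERVICE --publicurl http://$KEYSTONE_IP:5000/v2.0 --adminurl http://$KEYSTONE_IP:35357/v2.0 --internalurl http://$KEYSTONE_IP:5000/v2.0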
Set the environment variables

Set environment variables matching the keystone.sh settings:

# cat > /root/export.sh << _GEEK_
export OS_TENANT_NAME=admin      # If set to service, other services will fail verification.
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://172.16.0.254:5000/v2.0/
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=www.longgeek.com
export SERVICE_ENDPOINT=http://172.16.0.254:35357/v2.0/
_GEEK_
# echo 'source /root/export.sh' >> /root/.bashrc
# source /root/export.sh
Verify Keystone
keystone user-list
keystone role-list
keystone tenant-list
keystone endpoint-list
Troubleshooting Keystone

1. Check that ports 5000 and 35357 are listening (a quick check is shown after this list)
2. Check /var/log/keystone/keystone.log for error messages
3. If the keystone.sh script fails, check the variable settings in it, then rebuild the database and re-run it:

# mysql -uroot -pmysql
mysql> drop database keystone;
mysql> create database keystone; quit;
# keystone-manage db_sync
# sh keystone.sh

4. If the verification commands above fail, check the log and make sure the environment variables were set correctly
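For item 1, a quick way to confirm both ports are listening (assuming net-tools/netstat is installed):

# netstat -lnpt | grep -E '5000|35357'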

Installing and configuring Glance

Install glance
# apt-get install glance

Delete glance sqlite files:

# rm -f /var/lib/glance/glance.sqlite
Create glance DB
# mysql -uroot -pmysql
mysql> create database glance;
mysql> grant all on glance.* to 'glance'@'%' identified by 'glance';
mysql> flush privileges;
Change glance configuration file
Change glance-api.conf

Change the options below; leave the others at their defaults.

verbose = True
debug = True
sql_connection = mysql://glance:glance@172.16.0.254/glance
workers = 4
registry_host = 172.16.0.254
notifier_strategy = rabbit
rabbit_host = 172.16.0.254
rabbit_userid = guest
rabbit_password = guest
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-api-paste.ini
flavor = keystone
Change  glance-registry.conf

Change the options below; leave the others at their defaults.

verbose = True
debug = True
sql_connection = mysql://glance:glance@172.16.0.254/glance
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-registry-paste.ini
flavor = keystone

Start glance services:

# /etc/init.d/glance-api restart
# /etc/init.d/glance-registry restart
Synchronize to the db
# glance-manage version_control 0
# glance-manage db_sync
Check glance
# glance image-list
Upload image file

Download the Cirros image to use as a test; it is only about 10 MB:

# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
Added new image with ID: f61ee640-82a7-4d6c-8816-608bb91dab7d

The Cirros image can be logged into with a username and password, or with an SSH key. User: cirros, password: cubswin:)

Troubleshooting Glance

1. Ensure the configuration files are correct and that ports 9191 and 9292 are listening (see the check after this list)
2. Check the two log files under /var/log/glance/
3. Ensure the environment variable OS_TENANT_NAME=admin is set, otherwise 401 errors are reported
4. Make sure the image format given in the command matches the actual format of the image file
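For items 1 and 2, a quick check (assuming net-tools is installed; the two log files are typically api.log and registry.log):

# netstat -lnpt | grep -E '9191|9292'
# tail /var/log/glance/api.log /var/log/glance/registry.log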

Install Openvswitch

# apt-get install openvswitch-brcompat openvswitch-switch

Enable ovs-brcompatd at startup:

# sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch

Start openvswitch-switch:

# /etc/init.d/openvswitch-switch restart
 * ovs-brcompatd is not running            #brcompatd not running
 * ovs-vswitchd is not running
 * ovsdb-server is not running
 * Inserting openvswitch module
 * /etc/openvswitch/conf.db does not exist
 * Creating empty database /etc/openvswitch/conf.db
 * Starting ovsdb-server
 * Configuring Open vSwitch system IDs
 * Starting ovs-vswitchd
 * Enabling gre with iptables

Restart again until ovs-brcompatd, ovs-vswitchd, and ovsdb-server are all running:

# /etc/init.d/openvswitch-switch restart
# lsmod | grep brcompat
brcompat               13512  0 
openvswitch            84038  7 brcompat

If you still can not start, use the following command:

/etc/init.d/openvswitch-switch force-reload-kmod
Add Bridge
Add External network bridge br-ex

Create the br-ex bridge with Open vSwitch and add the eth1 NIC to it:

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1

After the above operation the eth1 NIC no longer carries the IP, so set the addresses manually:

# ifconfig eth1 0
# ifconfig br-ex 192.168.8.20/24
# route add default gw 192.168.8.1 dev br-ex
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf

Write to the configuration file:

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 172.16.0.254
        netmask 255.255.0.0

auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
        address 192.168.8.20
        netmask 255.255.255.0
        gateway 192.168.8.1
        dns-nameservers 8.8.8.8

Restarting the network interfaces may produce:

RTNETLINK answers: File exists
Failed to bring up br-ex.

The br-ex IP address, gateway, and DNS may need to be configured manually, or simply reboot the machine; after a reboot everything comes up normally.

Create internal network br-int
# ovs-vsctl add-br br-int
View network
# ovs-vsctl list-br
br-ex
br-int
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.4.0+build0"

Install quantum

Install Quantum server and Client API:

apt-get install quantum-server python-cliff python-pyparsing python-quantumclient

Install the Open vSwitch plugin to support OVS:

apt-get install quantum-plugin-openvswitch
Create Quantum DB
# mysql -uroot -pmysql
mysql> create database quantum;
mysql> grant all on quantum.* to 'quantum'@'%' identified by 'quantum';
mysql> flush privileges; quit;
Configure /etc/quantum/quantum.conf
# cat /etc/quantum/quantum.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
debug = True
verbose = True
state_path = /var/lib/quantum
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
api_paste_config = /etc/quantum/api-paste.ini
control_exchange = quantum
rabbit_host = 172.16.0.254
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = quantum.openstack.common.notifier.rpc_notifier
default_notification_level = INFO
notification_topics = notifications
[QUOTAS]
[DEFAULT_SERVICETYPE]
[SECURITYGROUP]
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = password
signing_dir = /var/lib/quantum/keystone-signing
Configure the Open vSwitch plugin
# cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini | grep -v ^$ | grep -v ^#
[DATABASE]
sql_connection = mysql://quantum:quantum@172.16.0.254/quantum
reconnect_interval = 2
[OVS]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
local_ip = 10.0.0.1
integration_bridge = br-int
tunnel_bridge = br-tun
[AGENT]
polling_interval = 2
[SECURITYGROUP]
Start quantum services
# /etc/init.d/quantum-server restart
Install OVS agent
# apt-get install quantum-plugin-openvswitch-agent

Before starting the OVS agent, make sure local_ip is set in ovs_quantum_plugin.ini and that the br-int bridge has already been created.
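A quick pre-flight check for both conditions (ovs-vsctl br-exists returns 0 when the bridge is present):

# grep local_ip /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
# ovs-vsctl br-exists br-int && echo 'br-int exists'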

# /etc/init.d/quantum-plugin-openvswitch-agent restart

After it starts, the OVS agent automatically creates the br-tun bridge according to the configuration file:

# ovs-vsctl list-br
br-ex
br-int
br-tun
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.4.0+build0"
Install quantum-dhcp-agent
# apt-get install quantum-dhcp-agent

Configure quantum-dhcp-agent:

# cat /etc/quantum/dhcp_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://172.16.0.254:35357/v2.0
dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
state_path = /var/lib/quantum
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq

Start service:

# /etc/init.d/quantum-dhcp-agent restart
Install L3 Agent
# apt-get install quantum-l3-agent

Configure the L3 agent:

# cat /etc/quantum/l3_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
external_network_bridge = br-ex
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://172.16.0.254:35357/v2.0
l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver

Start L3 agent:

# /etc/init.d/quantum-l3-agent restart
Configure the Metadata agent
# cat /etc/quantum/metadata_agent.ini | grep -v ^$ | grep -v ^#

[DEFAULT]
debug = True
auth_url = http://172.16.0.254:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = password
state_path = /var/lib/quantum
nova_metadata_ip = 172.16.0.254
nova_metadata_port = 8775

Start Metadata agent:

# /etc/init.d/quantum-metadata-agent restart
Troubleshooting Quantum

1. Check that all configuration files are correct and that port 9696 is listening
2. Check all log files under /var/log/quantum/
3. Make sure br-ex and br-int were added in advance
4. Quantum networking is explained at the end of this document, combining commands and the UI
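If the agent extension is available in your quantum client (an assumption for this Grizzly setup), listing the registered agents is another quick sanity check that the OVS, DHCP, and L3 agents have reported in:

# quantum agent-list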

Install Cinder

Note: Cinder configuration under Grizzly has a known bug, so it is covered first. Install the packages:

# apt-get install cinder-api cinder-common cinder-scheduler cinder-volume
Create DB
# mysql -uroot -pmysql
mysql> create database cinder;
mysql> grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
mysql> flush privileges; quit;
Create the cinder-volumes LVM volume group

Create a partition; here a primary partition is created on sdb using all available space:

# fdisk /dev/sdb
n
p
1
Enter
Enter
t
8e
w
# partx -a /dev/sdb
# pvcreate /dev/sdb1
# vgcreate cinder-volumes /dev/sdb1
# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  cinder-volumes   1   0   0 wz--n- 150.00g 150.00g
  localhost        1   2   0 wz--n- 279.12g  12.00m
Modify the configuration file
Modify cinder.conf
# cat /etc/cinder/cinder.conf
[DEFAULT]
# LOG/STATE
verbose = True
debug = True
iscsi_helper = tgtadm
auth_strategy = keystone
volume_group = cinder-volumes
volume_name_template = volume-%s
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
# RPC
rabbit_host = 172.16.0.254
rabbit_password = guest
rpc_backend = cinder.openstack.common.rpc.impl_kombu
# DATABASE
sql_connection = mysql://cinder:cinder@172.16.0.254/cinder
# API
osapi_volume_extension = cinder.api.contrib.standard_extensions
Modify api-paste.ini

Modify the [filter:authtoken] section at the end of the file:

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 172.16.0.254
service_port = 5000
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = password
signing_dir = /var/lib/cinder
Synchronize the database and start services

Sync to the db:

# cinder-manage db sync
2013-03-11 13:41:57.885 30326 DEBUG cinder.utils [-] backend <module 'cinder.db.sqlalchemy.migration' from '/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/migration.pyc'> __get_backend /usr/lib/python2.7/dist-packages/cinder/utils.py:561

Start service:

# for serv in api scheduler volume
do
    /etc/init.d/cinder-$serv restart
done
# /etc/init.d/tgt restart
Check
# cinder list
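As a further end-to-end check, you can create a throwaway 1 GB test volume, confirm it becomes available, and delete it again (the volume name here is just an example):

# cinder create --display-name test01 1
# cinder list
# cinder delete test01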
Troubleshooting Cinder

1. Check that the cinder services started and that port 8776 is listening
2. Check the log files under /var/log/cinder/
3. Make sure the volume group specified in the configuration file (volume_group = cinder-volumes) exists
4. Make sure the tgt service is running properly

Installing the Nova controller

In Grizzly, nova-compute depends on nova-conductor, so both compute services are installed at the same time.

# apt-get install nova-api nova-novncproxy novnc nova-ajax-console-proxy nova-cert nova-consoleauth nova-doc nova-scheduler
# apt-get install nova-compute nova-conductor
Create database
# mysql -uroot -pmysql
mysql> create database nova;
mysql> grant all on nova.* to 'nova'@'%' identified by 'nova';
mysql> flush privileges; quit;
Configuration
Config nova.conf
# cat /etc/nova/nova.conf
[DEFAULT]
# LOGS/STATE
debug = True
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge
# SCHEDULER
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
## VOLUMES
volume_api_class = nova.volume.cinder.API
# DATABASE
sql_connection = mysql://nova:nova@172.16.0.254/nova
# COMPUTE
libvirt_type = kvm
compute_driver = libvirt.LibvirtDriver
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host = True
# APIS
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = 172.16.0.254
s3_host = 172.16.0.254
# RABBITMQ
rabbit_host = 172.16.0.254
rabbit_password = guest
# GLANCE
image_service = nova.image.glance.GlanceImageService
glance_api_servers = 172.16.0.254:9292
# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://172.16.0.254:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = password
quantum_admin_auth_url = http://172.16.0.254:35357/v2.0
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# NOVNC CONSOLE
novncproxy_base_url = http://192.168.8.20:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address = 172.16.0.254
vncserver_listen = 0.0.0.0
# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
Config api-paste.ini

modify [filter:authtoken]:

# vim /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
Start nova service
# for serv in api cert scheduler consoleauth novncproxy conductor compute;
do
    /etc/init.d/nova-$serv restart
done
Sync
# nova-manage db sync
View Service

A smiley face (:-)) means the corresponding service is running normally; if the State column shows XXX, check that service's log under /var/log/nova/:

# nova-manage service list 2> /dev/null
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        localhost                            internal         enabled    :-)   2013-03-11 02:56:21
nova-scheduler   localhost                            internal         enabled    :-)   2013-03-11 02:56:22
nova-consoleauth localhost                            internal         enabled    :-)   2013-03-11 02:56:22
nova-conductor   localhost                            internal         enabled    :-)   2013-03-11 02:56:22
nova-compute     localhost                            nova             enabled    :-)   2013-03-11 02:56:23
Security group policy

Add rules to the default security group: allow ICMP (ping) responses and the SSH port:

# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
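To confirm the rules took effect:

# nova secgroup-list-rules default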
Troubleshooting Nova

1. Check that the parameters in the configuration files match the actual environment
2. Check the corresponding service logs under /var/log/nova/
3. Check the environment variables, the database connection, and that the required ports are listening
4. Check that the hardware supports virtualization (a quick check is shown below)
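For item 4, a quick check that the CPU exposes VT extensions (a non-zero count means they are present; kvm-ok comes from the cpu-checker package):

# egrep -c '(vmx|svm)' /proc/cpuinfo
# apt-get install cpu-checker && kvm-ok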

Install Horizon

Install the OpenStack Dashboard, Apache, and the WSGI module:

# apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard

Configure the Dashboard, modify the Memcache listener address:

Remove the Ubuntu theme:

# mv /etc/openstack-dashboard/ubuntu_theme.py /etc/openstack-dashboard/ubuntu_theme.py.bak
# vim /etc/openstack-dashboard/local_settings.py
DEBUG = True
CACHE_BACKEND = 'memcached://172.16.0.254:11211/'
OPENSTACK_HOST = "172.16.0.254"
# sed -i 's/127.0.0.1/172.16.0.254/g' /etc/memcached.conf

Start Memcached and Apache:

# /etc/init.d/memcached restart
# /etc/init.d/apache2 restart

Visit in a browser:

 http://172.16.0.254/horizon
user:    admin
pass: password
Troubleshooting Horizon

1. If you cannot log in, check /var/log/apache2/error.log and /var/log/keystone/keystone.log. A 401 error usually means the authentication information in the keystone sections of the nova, quantum, or cinder configuration files is incorrect.
2. A [Errno 111] Connection refused error at login usually means cinder-api or nova-api is not running.

Configuring External Network

Introduction

Floating IPs come from the External network, which is bridged to br-ex, i.e. the physical eth1 NIC. Only one External network needs to be created, and all tenants use this single External network to reach the outside world.

After the administrator creates the External network, the rest is left to each tenant to create their own networks.

Quantum terminology:
Network: divided into External and Internal networks; think of it as a switch.
Subnet: the network segment of a network, with its gateway and DNS settings.
Router: a router; tenants can create their own to isolate Internal networks from one another.
Interface: the WAN and LAN ports on a router.
Port: a port on the switch; it records the IP address information of whatever plugs into it.

Configuring a Quantum network is like plugging in network cables and connecting routers yourself. For example, a company reaches the Internet through a single ADSL dial-up uplink; internally, the company is one local area network (the External network). The company has several departments (multiple tenants). Department A (one tenant) runs frequent tests, and its DHCP server and IP addresses conflict with the other departments (other tenants), so it gets its own router (Router-1) to isolate its network from the rest. Department A's addresses cannot be on the same segment as Router-1's WAN port, because a router's WAN-port IP and LAN-port IP must not share a segment. So a private segment is defined for Department A behind the router's LAN port (the tenant creates its own Network, Subnet, and Router, adds an Interface to the router as the LAN port whose IP is the Subnet's gateway, and sets the router's WAN port to an External IP). Department A can then reach the outside network normally (the router's gateway port connects to External). Likewise, if several departments each need an isolated network, each gets its own router (Router-2, 3, 4, 5 ...).

Create an External Network

Note: the --router:external=True parameter marks this as an External network

EXTERNAL_NET_ID=$(quantum net-create external_net1 --router:external=True | awk '/ id / {print $4}')
Create a Subnet

My Quantum client has been upgraded from 2.0 to 2.2, so some command parameters may differ slightly; this quantum command cannot set DNS and host routes directly. The external segment below is 192.168.8.0/24. Note that the gateway must fall inside the CIDR you specify: for example, a gateway of 192.168.8.1 is only valid if the CIDR actually contains that address.
Create the external (floating-IP) Subnet, with its DHCP service disabled:

SUBNET_ID=$(quantum subnet-create external_net1 192.168.8.0/24 --name=external_subnet1 --gateway_ip 192.168.8.1 --enable_dhcp=False | awk '/ id / {print $4}')

Create an Internal Network

The demo tenant was already created by keystone.sh; we need its id:

# DEMO_ID=$(keystone tenant-list | awk '/ demo / {print $2}')
Create the demo tenant's Internal Network

For the demo tenant: create a network for that department:

# INTERNAL_NET_ID=$(quantum net-create demo_net1 --tenant_id $DEMO_ID | awk '/ id / {print $4}')
Create the demo tenant's Subnet

For the demo tenant: define the segment 10.1.1.0/24 with gateway 10.1.1.1; DHCP is enabled by default:

# DEMO_SUBNET_ID=$(quantum subnet-create demo_net1 10.1.1.0/24 --name=demo_subnet1 --gateway_ip 10.1.1.1 --tenant_id $DEMO_ID| awk '/ id / {print $4}')
Create the demo tenant's Router

Give the demo tenant a router:

# DEMO_ROUTER_ID=$(quantum router-create --tenant_id $DEMO_ID demo_router1 | awk '/ id / {print $4}')
Attach the Router to the Subnet

Attach the router the demo tenant just received to the 10.1.1.0/24 segment; the router's LAN-port address becomes 10.1.1.1:

# quantum router-interface-add  $DEMO_ROUTER_ID $DEMO_SUBNET_ID
Add an External IP to the Router

Plug the router's WAN port into the external network and assign it an IP address taken from the External network:

# quantum router-gateway-set $DEMO_ROUTER_ID $EXTERNAL_NET_ID
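To confirm the router now has both its LAN interface (10.1.1.1) and a gateway port on the External network, list its ports the same way the VM's port is looked up later:

# quantum port-list -- --device_id $DEMO_ROUTER_ID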

The demo tenant creates a virtual machine

Create a Port for the virtual machine we are about to boot, specifying its Network and Subnet and a fixed IP address:

# quantum net-list
+--------------------------------------+---------------+--------------------------------------+
| id                                   | name          | subnets                              |
+--------------------------------------+---------------+--------------------------------------+
| 18ed98d5-9125-4b71-8a37-2c9e3b07b99d | demo_net1     | 75896360-61bb-406e-8c7d-ab53f0cd5b1b |
| 1d05130a-2b1c-4500-aa97-0857fcb3fa2b | external_net1 | 07ba5095-5fa0-4768-9bee-7d44d2a493cf |
+--------------------------------------+---------------+--------------------------------------+
# DEMO_PORT_ID=$(quantum port-create --tenant-id=$DEMO_ID --fixed-ip subnet_id=$DEMO_SUBNET_ID,ip_address=10.1.1.11 demo_net1 | awk '/ id / {print $4}')

Boot the virtual machine as the demo tenant:

# glance image-list
+--------------------------------------+--------+-------------+------------------+---------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size    | Status |
+--------------------------------------+--------+-------------+------------------+---------+--------+
| f61ee640-82a7-4d6c-8816-608bb91dab7d | cirros | qcow2       | ovf              | 9761280 | active |
+--------------------------------------+--------+-------------+------------------+---------+--------+
# nova  --os-tenant-name demo boot --image cirros --flavor 2 --nic port-id=$DEMO_PORT_ID instance01

Add a floating IP to the demo tenant's virtual machine

After the virtual machine starts, you will find you cannot ping 10.1.1.11; the router isolates it, so of course it cannot be pinged from outside, but the virtual machine itself can reach the external network. (Because of the quantum client version problem there is no DNS option, so the VM's DNS is wrong; fix resolv.conf inside the VM.) If you want to ssh into the virtual machine, add a floating IP:
View demo tenant VM id

# nova --os_tenant_name=demo list
+--------------------------------------+------------+--------+---------------------+
| ID                                   | Name       | Status | Networks            |
+--------------------------------------+------------+--------+---------------------+
| b0b7f0a1-c387-4853-a076-4b7ba2d32ed1 | instance01 | ACTIVE | demo_net1=10.1.1.11 |
+--------------------------------------+------------+--------+---------------------+

Get the VM's port id

# quantum port-list -- --device_id b0b7f0a1-c387-4853-a076-4b7ba2d32ed1
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 95602209-8088-4327-a77b-1a23b51237c2 |      | fa:16:3e:9d:41:df | {"subnet_id": "75896360-61bb-406e-8c7d-ab53f0cd5b1b", "ip_address": "10.1.1.11"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

Create a floating IP

Note the ids in the output:
# quantum  --os_tenant_name=demo floatingip-create external_net1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.8.3                          |
| floating_network_id | 1d05130a-2b1c-4500-aa97-0857fcb3fa2b |
| id                  | f3670816-4d76-44e0-8831-5fe601f0cbe0 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 83792f9193e1449bb90f78400974d533     |
+---------------------+--------------------------------------+

Associate the floating IP with the VM

# quantum --os_tenant_name=demo floatingip-associate f3670816-4d76-44e0-8831-5fe601f0cbe0 95602209-8088-4327-a77b-1a23b51237c2
Associated floatingip f3670816-4d76-44e0-8831-5fe601f0cbe0

View the floating IP just associated

# quantum floatingip-show f3670816-4d76-44e0-8831-5fe601f0cbe0
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.1.1.11                            |
| floating_ip_address | 192.168.8.3                          |
| floating_network_id | 1d05130a-2b1c-4500-aa97-0857fcb3fa2b |
| id                  | f3670816-4d76-44e0-8831-5fe601f0cbe0 |
| port_id             | 95602209-8088-4327-a77b-1a23b51237c2 |
| router_id           | bf89066b-973d-416a-959a-1c2f9965e6d5 |
| tenant_id           | 83792f9193e1449bb90f78400974d533     |
+---------------------+--------------------------------------+
# ping 192.168.8.3
PING 192.168.8.3 (192.168.8.3) 56(84) bytes of data.
64 bytes from 192.168.8.3: icmp_req=1 ttl=63 time=32.0 ms
64 bytes from 192.168.8.3: icmp_req=2 ttl=63 time=0.340 ms
64 bytes from 192.168.8.3: icmp_req=3 ttl=63 time=0.335 ms
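With the SSH rule added to the default security group earlier, you should now be able to log in to the instance over its floating IP using the Cirros credentials given above:

# ssh cirros@192.168.8.3
(password: cubswin:))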

How does a tenant create a network on the dashboard?

Chrome works best as the browser; in Firefox some buttons cannot be clicked.
Create a test tenant; here it is created on the command line:

# TEST_TENANT_ID=$(keystone tenant-create --name test | awk '/ id / {print $4}')
# keystone user-create --name test --pass test --tenant-id $TEST_TENANT_ID

Log in as the test tenant and create its own network:

Click Network Topology; you can see the External network created earlier:
grizzly_test

 

Next, perform in the UI the same steps that were done above with commands.
1. Select Networks, click Create Network, and enter the network name:

grizzly_network

 

Subnet, enter the name, network address, and gateway:

grizzly_subnet

 

On the Subnet Detail tab, enter the DHCP range and DNS address; you can also add static routes to other networks:

grizzly_dns

 

This time we can see the network you just created in the Network Topology:

grizzly_net_done

 

2. Routers, click Create Router, enter a name:

grizzly_router

 

Click the newly created test_router1 to open its Interfaces page, click Add Interface (the LAN port), and select the just-created test_subnet:
grizzly_interface_add

 

Take another look at the topology diagram:
interface_add_topology

 

Back on the Interfaces page, set an IP on the router's WAN port, taking one address from the External network; select Add Gateway Interface:
grizzly_interface_gateway

 

Continuing with the screenshots:
interface_gateway_add

 

The network topology after the test tenant creates a virtual machine:
instance_topology

 

Log in as the admin user to view the network topology; you can see the External network and the demo and test tenants' networks:
admin_topology

 

In fact, Quantum networking is not complex; it becomes easy to understand once you map it onto real-life networking.

Reference material

http://www.longgeek.com/2012/07/30/rhel-6-2-openstack-essex-install-only-one-node/
http://www.chenshake.com/openstack-folsom-guide-for-ubuntu-12-04/#i-21
http://liangbo.me/index.php/2012/10/07/openstack-folsom-quantum-openvswitch/
http://www.ibm.com/developerworks/cn/cloud/library/1209_zhanghua_openstacknetwork/
http://docs.openstack.org/folsom/openstack-network/admin/content/index.html
http://docs.openstack.org/trunk/openstack-network/admin/content/index.html
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/index.html
