Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 75/500 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • How do I fix DNS resolution, which doesn't work after upgrading to Ubuntu 13.10 (Saucy)?

    - by Witek
    After upgrading to 13.10 my DNS resolution fails. It seems the DNS servers I get via DHCP (LAN) are not used. I could temporarily solve the problem by adding nameserver 8.8.8.8 to /etc/resolv.conf, but then the intranet hosts still cannot be resolved. When clicking on the Connection Information menu item on the network indicator, the Primary DNS and Secondary DNS are set correctly, but my computer seems not to use them. So my questions: What should I put into resolv.conf, if anything? How do I find out which name servers my computer is querying? Where do I look next to find out why the name servers received by DHCP are not used?
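
    A hedged way to see which resolver is actually in play (on 13.10, NetworkManager normally points /etc/resolv.conf at a local dnsmasq on 127.0.1.1; the interface name and intranet host below are placeholders):

        nmcli dev list iface eth0 | grep IP4.DNS      # servers NetworkManager received via DHCP
        cat /etc/resolv.conf                          # what the stub resolver is told to use
        dig intranet-host.example.local @127.0.1.1    # ask the local dnsmasq directly
        dig intranet-host.example.local @192.168.1.1  # ask the DHCP-provided server directly

    Comparing the last two answers usually shows whether the local dnsmasq or the upstream server is the one dropping the intranet names.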

    Read the article

  • Upgrade Ubuntu Server from 11.04 to 11.10 without internet connection

    - by Tony Marciano
    We have application software that really likes Ubuntu Server 11.10, and I need to upgrade several 11.04 servers to this version. Two questions: The servers that need to be upgraded do not have Internet access in our datacenter, for security reasons. I need to download the updates/upgrades to a secure system and then transfer them to the datacenter servers for installation. Is anyone aware of the steps involved? Where do I get the 11.10 updates? I don't see an option on the Ubuntu site for downloading specific versions of the OS and/or upgrades.
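
    One commonly documented offline path is to use the 11.10 alternate install image, which carries enough of the package set to drive a release upgrade of a base system; a hedged sketch (the ISO filename and mount point are assumptions, and the image itself comes from the Ubuntu release archives):

        sudo mount -o loop ubuntu-11.10-alternate-amd64.iso /media/cdrom
        sudo /media/cdrom/cdromupgrade        # upgrade script shipped on the alternate image

    Packages that are not on the image would still have to be fetched on a connected machine and copied across separately.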

    Read the article

  • How to host a simple website using a domain name I own

    - by Cedric Martin
    I'm familiar with hosting webapps when I'm doing "the whole shebang" of installing / configuring / setting up Apache/Tomcat/PostgreSQL / "coding" the website myself using HTML / JSP / CSS etc. on dedicated servers I'm renting. But in that case I "own" the entire stack: from the Debian GNU/Linux dedicated servers to every single file that is served. Now I'd like to do something much simpler, and I must admit I don't know what's involved at all. I'd like to host a simple website made of only a few static pages (no database, no nothing) and I'd like it to be accessible from "example.com". What needs to be done technically to have such a thing? How is the DNS supposed to be set up? Note that I do not want to host this on one of my dedicated servers.
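
    For the DNS side, the registrar's control panel essentially just needs records pointing the name at whichever host serves the pages; a minimal sketch in zone-file notation (the IP address is a documentation placeholder):

        example.com.        IN  A      203.0.113.10
        www.example.com.    IN  CNAME  example.com.

    With a shared hosting plan the provider usually tells you which A record or nameservers to use, so no server administration is needed at all.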

    Read the article

  • Discussion of a Distributed Data Storage implementation

    - by fegol
    I want to implement a distributed data store using a client/server architecture. Each data item will be stored persistently on disk on one of several remote servers. The client uses a library to update and query the data, shielding the client from the data's actual location. This should allow a client to associate keys (String) with values (byte[]), much as a Map does. The system must ensure that the amount of data stored in each server is approximately the same. The set of servers is known beforehand by the other servers and clients. Both the client and the server will be written in Java, using sockets, threads, and files. I am opening this topic to discuss the best way to implement this idea, keeping it simple: what the issues of such an implementation are, how to measure its performance, and what its limitations would be.
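
    A minimal sketch of the key-to-server assignment in the question's language, Java (not a full design; the server list and any remote put/get protocol are assumptions for illustration):

        import java.util.List;

        class KeyRouter {
            private final List<String> servers;   // known beforehand, e.g. "host1:9000", "host2:9000"

            KeyRouter(List<String> servers) { this.servers = servers; }

            // Map a key to one server so keys spread roughly evenly across the set.
            String serverFor(String key) {
                int bucket = (key.hashCode() & 0x7fffffff) % servers.size();  // mask sign bit to avoid negatives
                return servers.get(bucket);
            }
        }

    Plain modulo reshuffles most keys if the server set ever changes; consistent hashing is the usual refinement if that matters.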

    Read the article

  • Google's process for publishing/modifying pages [closed]

    - by Glenn Dayton
    I'm assuming that a group of people at Google have control of certain sections of google.com, but how does Google make sure that employees don't accidentally or intentionally sabotage the website? Does Google use Adobe Contribute or some similar product for sharing/publishing the website? Do employees use WebDAV, FTP, SFTP, or SSH to publish the site? Since Google has hundreds of thousands of servers, it probably takes some time for its servers to update. Do they transmit the new copy of the website to all servers before publishing it at once? This question does not apply to Google editing a database and having a page reflect the database's changes. It applies to employees editing the source code and/or back end of the site.

    Read the article

  • Upgrading 8.10 server to LTS

    - by user3215
    I'm planning to upgrade Ubuntu 8.10 VirtualBox VM servers to LTS (obviously 10.04), since 8.10 has no support. As far as I know I'll be executing the following to upgrade: apt-get install update-manager-core, then do-release-upgrade. Could anybody tell me how I could upgrade an Ubuntu server from an alternate ISO image (is the alternate ISO image used for desktop editions the same one used for servers?)? I heard it's possible to upgrade an LTS directly to another LTS; could I upgrade 8.10 to 9.04 and then go directly to 10.04, skipping 9.10? The 8.10 servers are hosting many services/applications/databases like apache2, tomcat6, LDAP, MySQL, CVS... and I'm not sure that all of them will work as before after the upgrade. If there are any precautions I have to follow before upgrading, please let me know (besides a backup, of course; I'm not going to take one because I will be trying this on a copy of the VDI/VMDK VMs). Thanks!
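
    For reference, only an LTS release (8.04) can jump straight to the next LTS; a non-LTS 8.10 install has to go release by release (8.10 -> 9.04 -> 9.10 -> 10.04). A hedged sketch of one hop, repeated for each step (note that once a release leaves the main archive, its sources.list entries have to point at old-releases.ubuntu.com for these tools to work):

        sudo apt-get update
        sudo apt-get install update-manager-core
        sudo do-release-upgrade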

    Read the article

  • dig @server doesn't work

    - by JustTrying
    I have Ubuntu 12.04 with BIND9, working just as a caching server (forwarding to 8.8.8.8). When I use, for example, dig +norecurse @l.root-servers.net www.uniroma1.it, I obtain the following output: ; <<>> DiG 9.8.1-P1 <<>> +norecurse @l.root-servers.net www.uniroma1.it ; (1 server found) ;; global options: +cmd ;; connection timed out; no servers could be reached Using Wireshark I discovered that the outgoing queries are correct, but there aren't any incoming answers. Why? P.S. Using simply dig www.uniroma1.it I obtain the correct answers.
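
    A couple of hedged checks that tend to narrow this down (the interface name is a placeholder): watch whether the replies ever reach the wire, whether a local firewall drops them, and whether TCP behaves differently from UDP:

        sudo tcpdump -ni eth0 port 53                             # query should be followed by a reply
        sudo iptables -L -n -v                                    # look for rules dropping inbound DNS replies
        dig +norecurse +tcp @l.root-servers.net www.uniroma1.it   # same query over TCP

    If Wireshark shows the query leaving but no reply arriving, the block is usually upstream (a network firewall or ISP filtering port 53 to arbitrary servers) rather than on the box itself.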

    Read the article

  • Web hosting announced downtime and how it affects FORWARD domain names?

    - by maple_shaft
    Our web hosting provider, which holds our FORWARD domain names, announced that at some point in the next couple of weeks they will be migrating servers and that this will cause 5-10 minutes of downtime at some point during that week, during what happens to be our core business hours. They say that for technical reasons it is impossible to give an exact date or time when this downtime will occur. My questions are: If my domains are set to FORWARD to a static IP on servers not hosted by the web hosting provider in question, then will this affect the DNS servers correctly routing to my website? Are there legitimate technical reasons for such a wide window of time, or could this just be a blanket statement to cover laziness in not being more organized with their server migrations? Are such downtimes normal for web hosting providers, or should I start to consider other providers?

    Read the article

  • WHM local/external mail server confusion

    - by BWRic
    We host several websites on the same server using WHM, but this seems to confuse the mail routing when someone has their own external mail servers - the server keeps looking locally. We have our own email accounts hosted on the server. When creating an account for a client on the same server, WHM adds the default entries to the DNS for that account. However, this client has their own mail servers elsewhere, and when sending them an email it never reaches that external server - the box just sees the local, incorrect one. I realise I can update my DNS to point to the external server, but this means I am copying their settings, and if they change them I will also need to update mine. Are there some settings I can use to force it to use the external servers without having to copy the settings?
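
    On cPanel/WHM boxes the local-versus-remote decision is made per domain by Exim, not by public DNS, so a hedged sketch of the usual fix (the client domain is a placeholder) is to switch the domain's routing from local to remote, either through WHM's "Edit MX Entry" / "Email Routing" screen ("Remote Mail Exchanger") or directly:

        grep clientdomain.com /etc/localdomains    # if listed here, Exim delivers the mail locally
        # move the entry from /etc/localdomains into /etc/remotedomains, and Exim will
        # look up the client's real MX records instead of delivering to the local account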

    Read the article

  • How do I debug a cluster running Microsoft server 2003?

    - by alcor
    I'm the sole developer of a complex, critical software system, written in Visual C++ 2005. It's deployed in a classic Microsoft cluster scenario (active/passive) running Windows Server 2003 R2. If server A goes down, the other one (B) starts and takes over its duties. You have to know that: Both servers have the same Microsoft patches/fixes, same hardware, same everything. Both servers use the same storage (a RAID-6 over Fibre Channel). The software has a main module that launches the peripheral modules; if a peripheral module crashes, the main module restarts it. When I switch the application to one of the two servers (let's say server B), two of the peripheral modules of the main application start to crash, apparently without reason, about 2 seconds after the peripheral module starts. What could I do to analyze/inspect/resolve this weird situation?
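
    One hedged starting point is to capture a crash dump of the failing module on server B and compare it with a run on server A (adplus ships with the Debugging Tools for Windows; the process name and output path are placeholders):

        adplus -crash -pn PeripheralModule.exe -o C:\dumps

    Opening the resulting dump in WinDbg usually shows whether the fault is inside the module itself or in something environment-specific, such as a path, permission, or DLL version that differs between the two nodes.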

    Read the article

  • I need advice on how to debug a cluster

    - by alcor
    I'm the only developer of a complex, critical software system, written in Visual C++ 2005. It's deployed in a classic Microsoft cluster scenario (active/passive) running Windows Server 2003 R2. If server A goes down, the other one (B) starts and takes over its duties. You have to know that: both servers have the same Microsoft patches/fixes, same hardware, same everything. Both servers use the same storage (a RAID-6 over Fibre Channel). The software has a main module that launches the peripheral modules; if a peripheral module crashes, the main module restarts it. When I switch the application to one of the two servers (let's say server B), two of the peripheral modules of the main application start to crash, apparently without reason, about 2 seconds after the peripheral module starts. What could I do to analyze/inspect/resolve this weird situation?

    Read the article

  • Wait random number of minutes

    - by TiborKaraszi
    Why on earth would you want to do that? you ask. Say you have a job that is scheduled to start at the same time over a number of servers. This might be because you have an SQL Server Master/Target server environment (MSX/TSX) or you quite simply script a job and execute that script on several servers. You probably want to spread the load on your SAN and virtual machine host a bit. This is the exact reason I use this procedure. I frequently use MSX servers and I usually add a job step (executing this...(read more)
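
    A minimal sketch of such a wait step in T-SQL (not the author's exact procedure; the 0-9 minute range is an assumption):

        DECLARE @minutes int, @delay char(8);
        SET @minutes = ABS(CHECKSUM(NEWID())) % 10;                           -- random 0..9
        SET @delay   = CONVERT(char(8), DATEADD(minute, @minutes, 0), 108);   -- format as 'hh:mm:ss'
        WAITFOR DELAY @delay;                                                 -- pause before the real job step

    Each server evaluates the random value independently, so jobs scheduled at the same wall-clock time end up starting a few minutes apart.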

    Read the article

  • Express5800 to Mesh with SQL Server 2012

    NEC's GX servers have been engineered to supply high performance and availability. At their core is an Intel Xeon E7 processor with the power to handle up to 2TB of memory and 160 threads. In addition, thanks to QuickPath Interconnect technology, GX servers boast as much as a 200 percent improvement in database performance over their predecessors. The combination of NEC servers with Microsoft SQL Server 2012 gives users the necessary capabilities to truly realize the cloud's potential for their needs in a number of ways. Organizations get stable platforms built for enterprise environments that offer hig...

    Read the article

  • Persist changes in C

    - by Mohit Deshpande
    I am developing a database-like application that stores a structure containing: struct Dictionary { char *key; char *value; struct Dictionary *next; }; As you can see, I am using a linked list to store information. But the problem begins when the user exits the program: I want the information to be stored somewhere. So I was thinking of storing the linked list in a permanent or temporary file using fopen, then, when the user starts the program, retrieving the linked list. Here is the method that prints the linked list to the console: void PrintList() { int count = 0; struct Dictionary *current; current = head; if (current == NULL) { printf("\nThe list is empty!"); return; } printf(" Key \t Value\n"); printf(" ======== \t ========\n"); while (current != NULL) { count++; printf("%d. %s \t %s\n", count, current->key, current->value); current = current->next; } } So I am thinking of modifying this method to print the information through fprintf instead of printf, and then the program would just get the information from the file. Could someone help me with how to read and write this file? What kind of file should it be, temporary or regular? How should I format the file (I was thinking of just having the key first, then the value, then a newline character)?
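
    A hedged sketch of the read/write side (not the poster's code; it assumes keys and values contain no whitespace, and uses a regular file so the data survives restarts):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct Dictionary { char *key; char *value; struct Dictionary *next; };

        void SaveList(struct Dictionary *head, const char *path) {
            FILE *f = fopen(path, "w");
            if (!f) return;
            for (struct Dictionary *cur = head; cur != NULL; cur = cur->next)
                fprintf(f, "%s %s\n", cur->key, cur->value);   /* key, space, value, newline */
            fclose(f);
        }

        struct Dictionary *LoadList(const char *path) {
            FILE *f = fopen(path, "r");
            struct Dictionary *head = NULL;
            char key[256], value[256];
            if (!f) return NULL;
            while (fscanf(f, "%255s %255s", key, value) == 2) {
                struct Dictionary *node = malloc(sizeof *node);
                node->key = strdup(key);       /* strdup is POSIX; copy manually if unavailable */
                node->value = strdup(value);
                node->next = head;             /* order reverses on reload, which a dictionary tolerates */
                head = node;
            }
            fclose(f);
            return head;
        }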

    Read the article

  • Python import error: Symbol not found, but the symbol *is not* present in the file

    - by Autopulated
    I get this error when I try to import ssrc.spread: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so, 2): Symbol not found: __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE The file in question (_spread.so) includes the symbol: $ nm _spread.so | grep _ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE (twice because the file is a fat ppc/x86 binary) EDIT: okay, as James points out, the U means that the symbol is undefined but required by the object file. With some more digging I've noticed (where I should have looked first...) these linker errors during compilation: CC=g++ CXX=g++ g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -O3 -I../.. -I../.. -I/usr/local/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -O2 -I/usr/local/include -std=c++98 -pipe -fno-gnu-keywords -fvisibility-inlines-hidden -o SsrcSpread.o -c SsrcSpread.cc CC=g++ CXX=g++ /bin/sh ../../libtool --tag=CXX --mode=link g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python \ -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -L../../ssrc -lssrcspread -L/usr/local/lib -ltspread-core -o _spread.so SsrcSpread.o mkdir .libs g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -o _spread.so SsrcSpread.o -Wl,-bind_at_load -L/Dev/libssrcspread-1.0.6/ssrc /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a -L/usr/local/lib -ltspread-core ld: warning: in ~/Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (ppc) ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (ppc) ld: warning: in /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (i386) ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (i386) I'm also not entirely sure that the 10.4 sdk is the right one for compiling python modules (but switching to 10.6 didn't seem to help).
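
    Given those "file was built for unsupported file format" warnings, a hedged first check is to see which architectures each piece actually contains and whether they overlap with the ppc/i386 build flags (paths taken from the log):

        lipo -info /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a
        lipo -info /usr/local/lib/libtspread-core.dylib
        lipo -info _spread.so

    If the static library is, say, x86_64-only while the module is being built for ppc/i386, the linker skips it entirely, leaving the Mailbox symbol undefined exactly as nm reports.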

    Read the article

  • Dynamic evaluation of a table column within an insert before trigger

    - by Tim Garver
    Hi all, I have 3 tables: main, types and linked. main has an id column and 32 type columns; types has id, type; linked has id, main_id, type_id. I want to create a before-insert trigger on the main table. It needs to compare its 32 type columns to the values in the types table and, if the main table column has an 'X' for its value, insert the main_id and types_id into the linked table. I have done a lot of searching, and it looks like a prepared statement would be the way to go, but I wanted to ask the experts. The issue is I don't want to write 32 IF statements, and even if I did, I would need to query the types table to get the ID for each type, which seems like a huge waste of resources. Ideally I want to do this inside my trigger: BEGIN DECLARE @types results_set -- (not sure if this is a valid type); -- (I am sure my loop syntax is all wrong here)... SET @types = (select * from types) for i=0;i<types.records;i++ { IF NEW.[i.type] = 'X' THEN insert into linked (main_id,type_id) values (new.ID, i.id); END IF; } END; Anyway, this is what I was hoping to do. Maybe there is a way to dynamically set the field name inside a results loop, but I can't find a good example of this. Thanks in advance, Tim
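
    A hedged alternative that skips both the dynamic SQL and the per-type lookups (assuming MySQL, since the question's syntax suggests it; column and type names are placeholders, and it is written as an AFTER INSERT trigger so the new row's auto-increment id is available): a single INSERT ... SELECT against the types table, with one predicate per type column:

        CREATE TRIGGER main_ai AFTER INSERT ON main
        FOR EACH ROW
          INSERT INTO linked (main_id, type_id)
          SELECT NEW.id, t.id
          FROM types t
          WHERE (t.type = 'type1' AND NEW.type1 = 'X')
             OR (t.type = 'type2' AND NEW.type2 = 'X');
             -- ... one OR line per remaining type column

    The 32 columns still have to be listed once, but the statement resolves every matching type id in one pass instead of 32 separate queries.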

    Read the article

  • How to force inclusion of an object file in a static library when linking into executable?

    - by Brian Bassett
    I have a C++ project that, due to its directory structure, is set up as a static library A, which is linked into shared library B, which is linked into executable C. (This is a cross-platform project using CMake, so on Windows we get A.lib, B.dll, and C.exe, and on Linux we get libA.a, libB.so, and C.) Library A has an init function (A_init, defined in A/initA.cpp) that is called from library B's init function (B_init, defined in B/initB.cpp), which is called from C's main. Thus, when linking B, A_init (and all symbols defined in initA.cpp) is linked into B (which is our desired behavior). The problem comes in that the A library also defines a function (Af, defined in A/Afort.f) that is intended to be dynamically loaded (i.e. LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux). Since there are no references to Af from library B, symbols from A/Afort.o are not included in B. On Windows, we can artificially create a reference by using the pragma: #pragma comment (linker, "/export:_Af") Since this is a pragma, it only works on Windows (using Visual Studio 2008). To get it working on Linux, we've tried adding the following to A/initA.cpp: extern void Af(void); static void (*Af_fp)(void) = &Af; This does not cause the symbol Af to be included in the final link of B. How can we force the symbol Af to be linked into B?
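
    For what it's worth, GNU ld has two switches commonly used for this; a hedged CMake sketch (the symbol spelling is an assumption, since a Fortran-compiled Af may be emitted as af_ or similar depending on the compiler's name mangling):

        # force the symbol to be treated as undefined, pulling its archive member into B
        target_link_libraries(B "-Wl,-u,Af" A)

        # or pull every object from libA.a into B, regardless of references
        target_link_libraries(B "-Wl,--whole-archive" A "-Wl,--no-whole-archive")

    Running nm on libA.a shows the exact symbol name to pass to -u.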

    Read the article

  • Best implementation of Java Queue?

    - by Georges Oates Larsen
    I am working (in Java) on a recursive image processing algorithm that recursively traverses the pixels of the image, outward from a center point. Unfortunately, that causes stack overflows, so I have decided to switch to a queue-based algorithm. Now, this is all fine and dandy - but considering the fact that the queue will be analyzing thousands of pixels in a very short amount of time, while constantly popping and pushing, without maintaining a predictable size (it could be anywhere between length 100 and 20000), the queue implementation needs to have significantly fast popping and pushing abilities. A linked list seems attractive due to its ability to push elements onto itself without rearranging anything else in the list, but in order for it to be fast enough it would need easy access to both its head and its tail (or second-to-last node if it were not doubly linked). Sadly, I cannot find any information about the underlying implementation of linked lists in Java, so it's hard to say if a linked list is really the way to go... This brings me to my question: What would be the best implementation of the Queue interface in Java for what I intend to do? (I do not wish to edit or even access anything other than the head and tail of the queue - I do not wish to do any sort of rearranging, or anything. On the flip side, I DO intend to do a lot of pushing and popping, and the queue will be changing size quite a bit, so preallocating would be inefficient.)
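
    A hedged sketch using java.util.ArrayDeque, which gives amortized O(1) offer/poll at the ends with no per-node allocation and grows as needed, so the swinging queue length is not a problem (the coordinate pairs and bounds are placeholders for whatever the traversal actually enqueues):

        import java.util.ArrayDeque;
        import java.util.Queue;

        class OutwardTraversal {
            static void traverse(int startX, int startY, int width, int height) {
                Queue<int[]> frontier = new ArrayDeque<>();
                boolean[][] seen = new boolean[width][height];
                frontier.offer(new int[] { startX, startY });
                seen[startX][startY] = true;
                while (!frontier.isEmpty()) {
                    int[] p = frontier.poll();                       // take from the head
                    // ... process the pixel at (p[0], p[1]) here ...
                    int[][] next = { { p[0] + 1, p[1] }, { p[0] - 1, p[1] },
                                     { p[0], p[1] + 1 }, { p[0], p[1] - 1 } };
                    for (int[] n : next) {
                        if (n[0] >= 0 && n[0] < width && n[1] >= 0 && n[1] < height
                                && !seen[n[0]][n[1]]) {
                            seen[n[0]][n[1]] = true;
                            frontier.offer(n);                       // push onto the tail
                        }
                    }
                }
            }
        }

    LinkedList also implements Queue and would work, but it allocates a node per element, so ArrayDeque is usually the faster choice for exactly this push/pop pattern.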

    Read the article

  • Facebook - generating fb tag in Jquery - not working

    - by Gublooo
    Hey guys, kind of stuck here - I have this functionality on my site where I display 10 user comments along with their photo, and when a user clicks on "More", more results are fetched from the DB using jQuery and displayed via a string generated in the jQuery method. This is a shortened version of my jQuery function: $(".more_swipes").live('click',function() { var profile_id = 'profile-user_id; ?'; $.getJSON("/profile/more-swipes", { user_id: profile_id},function(swipes) { $.each(swipes, function(i,data){ newcomment="<div"; newcomment +="<fb:profile-pic uid='"+data.fb_userid+"' linked='false'/"; newcomment +="Name="+data.user_name+"-Comment="+data.comment; $("#moreswipes").append(newcomment); }); }); return false; }); Here as you can see I'm using the tag <fb:profile-pic uid='"+data.fb_userid+"' linked='false'/ Now when more results get displayed, the profile pic of the user is not getting displayed - when I look at the source code, this is how the fb tag looks when it's generated through jQuery: <fb:profile-pic uid="222222" linked="false" height="50" width="50" Whereas the FB tags not generated through the jQuery code look like <fb:profile-pic uid="222222" linked="false" height="50" width="50" class="FB_profile_pic <fb_profile_pic_rendered FB_ElementReady" style="width:px;height:50px" <img src="http://profile.atk......2025.jpg" alt="User Name" title="User Name" style="width:px;height:50px" class="FB_profile_pic"/ </fb:profile-pic So when I add it as a string in the jQuery function, Facebook is not identifying and rendering it. Any idea how to fix this? Thanks a bunch
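
    For what it's worth, the Facebook JavaScript SDK did not render XFBML tags inserted after the initial page parse; the usual fix was to ask the SDK to re-parse the container after appending. A hedged sketch (the element id comes from the question, and the call assumes the newer JS SDK is loaded; the older Connect library used FB.XFBML.Host.parseDomTree() instead):

        $("#moreswipes").append(newcomment);
        FB.XFBML.parse(document.getElementById("moreswipes"));  // re-render the new fb: tags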

    Read the article

  • How do I run an Access 2007 module from VB6?

    - by Mahmoud
    I have created a module in Access 2007 that will update linked tables, but I want to run this module from VB6. I have tried this code from Microsoft, but it didn't work: Sub AccessTest1() Dim A As Object Set A = CreateObject("Access.Application") A.Visible = False A.OpenCurrentDatabase (App.Path & "/DataBase/acc.accdb") A.DoCmd.RunMacro "RefreshLinks" End Sub What I am aiming to do is to allow my program to update all linked tables to new links, in case the program is used on another computer. In case you want to take a look at the module, here it is: Sub CreateLinkedJetTable() Dim cat As ADOX.Catalog Dim tbl As ADOX.Table Set cat = New ADOX.Catalog ' Open the catalog. cat.ActiveConnection = CurrentProject.Connection Set tbl = New ADOX.Table ' Create the new table. tbl.Name = "Companies" Set tbl.ParentCatalog = cat ' Set the properties to create the link. tbl.Properties("Jet OLEDB:Link Datasource") = CurrentProject.Path & "/db3.mdb" tbl.Properties("Jet OLEDB:Remote Table Name") = "Companies" tbl.Properties("Jet OLEDB:Create Link") = True ' To link a table with a database password set the Link Provider String ' tbl.Properties("Jet OLEDB:Link Provider String") = "MS Access;PWD=Admin;" ' Append the table to the tables collection. cat.Tables.Append tbl Set cat = Nothing End Sub Sub RefreshLinks() Dim cat As ADOX.Catalog Dim tbl As ADOX.Table Set cat = New ADOX.Catalog ' Open the catalog. cat.ActiveConnection = CurrentProject.Connection Set tbl = New ADOX.Table For Each tbl In cat.Tables ' Verify that the table is a linked table. If tbl.Type = "LINK" Then tbl.Properties("Jet OLEDB:Link Datasource") = CurrentProject.Path & "/db3.mdb" ' To refresh a linked table with a database password set the Link Provider String 'tbl.Properties("Jet OLEDB:Link Provider String") = "MS Access;PWD=Admin;" End If Next End Sub
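
    A hedged guess at why the snippet fails: DoCmd.RunMacro runs macro objects, while RefreshLinks is a public Sub in a standard module, which is normally invoked through Application.Run instead. A sketch of the VB6 side (paths taken from the question):

        Dim A As Object
        Set A = CreateObject("Access.Application")
        A.Visible = False
        A.OpenCurrentDatabase App.Path & "/DataBase/acc.accdb"
        A.Run "RefreshLinks"        ' runs the public Sub, not a macro
        A.CloseCurrentDatabase
        Set A = Nothing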

    Read the article

  • Initializing two pointers to the same value in a "for" loop

    - by MCP
    I'm working with a linked list and am trying to initialize two pointers to the value of the "first"/"head" pointer. I'm trying to do this cleanly in a "for" loop. The point of all this is so that I can run two pointers through the linked list, one right behind the other (so that I can modify as needed)... Something like: //listHead = main pointer to the linked list for (blockT *front, *back = listHead; front != NULL; front = front->next) //...// back = back->next; The idea being I can increment front early so that it's one ahead, doing the work, and not incrementing back until the bottom of the code block, in case I need to back up in order to modify the linked list... Regardless of the "why" of this, in addition to the above I've tried: for (blockT *front = *back = listHead; /.../ for (blockT *front = listHead, blockT *back = listHead; /.../ I would like to avoid a pointer to a pointer. Do I just need to initialize these before the loop? As always, thanks!
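
    A hedged sketch of the syntax that does what the question asks (valid since C99, with blockT taken from the question): one declaration, two declarators, each with its own initializer, so both pointers stay local to the loop:

        for (blockT *front = listHead, *back = listHead; front != NULL; front = front->next) {
            /* ... do the work with front here ... */
            back = back->next;   /* advance the trailing pointer at the bottom of the block */
        }

    The earlier attempts fail because "blockT *front, *back = listHead" initializes only back, and repeating the type name after the comma is not allowed inside a single declaration.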

    Read the article

  • Designing an email system to guarantee delivery

    - by GlenH7
    We are looking to expand our use of email for notification purposes. We understand it will generate more inbox volume, but we are being selective about which events we fire notification on in order to keep the signal-to-noise ratio high. The big question we are struggling with is designing a system that guarantees that the email was delivered. If an email isn't delivered, we will consider that an exception event that needs to be investigated. In reality, I say almost guarantees because there aren't any true guarantees with email. We're just looking for a practical solution to making sure the email got there and experiences others have had with the various approaches to guaranteeing delivery. For the TL;DR crowd - how do we go about designing a system to guarantee delivery of emails? What techniques should we consider so we know the emails were delivered? Our biggest area of concern is what techniques to use so that we know when a message is sent out that it either lands in an inbox or it failed and we need to do something else. Additional requirements: We're not at the stage of including an escalation response, but we'll want that in the future or so we think. Most notifications will be internal to our enterprise, but we will have some notifications being sent to external clients. Some of our application is in a hosted environment. We haven't determined if those servers can access our corporate email servers for relaying or if they'll be acting as their own mail servers. Base design / modules (at the moment): A module to assign tracking identification A module to send out emails A module to receive delivery notification (perhaps this is the same as the email module) A module that checks sent messages against delivery notification and alerts on undelivered email. Some references: Atwood: Send some email Email Tracking Some approaches: Request a response (aka read-receipt or Message Disposition Notification). Seems prone to failure since we have cross-compatibility issues due to differing mail servers and software. Return receipt (aka Delivery Status Notification). Not sure if all mail servers honor this request or not Require an action and therefore prove reply. Seems burdensome to force the recipients to perform an additional task not related to resolving the issue. And no, we haven't come up with a way of linking getting the issue fixed to whether or not the email was received. Force a click-through / Other site sign-in. Similar to requiring some sort of action, this seems like an additional burden and will annoy the users. On the other hand, it seems the most likely to guarantee someone received the notification. Hidden image tracking. Not all email providers automatically load the image, and how would we associate the image(s) with the email tracking ID? Outsource delivery. This gets us out of the email business, but goes back to how to guarantee the out-sourcer's receipt and subsequent delivery to the end recipient. As a related concern, there will be an n:n relationship between issue notification and recipients. The 1 issue : n recipients subset isn't as much of a concern although if we had a delivery failure we would want to investigate and fix the core issue. Of bigger concern is n issues : 1 recipient, and we're specifically concerned in making sure that all n issues were received by the recipient. How does forum software or issue tracking software handle this requirement? If a tracking identifier is used, Where is it placed in the email? In the Subject, or the Body?
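
    For reference on the return-receipt option: delivery status notifications are the ESMTP DSN extension (RFC 3461), requested per recipient during the SMTP transaction, roughly like this when the receiving server advertises DSN (the addresses are placeholders):

        MAIL FROM:<alerts@example.com> RET=HDRS ENVID=notify-12345
        RCPT TO:<user@example.org> NOTIFY=SUCCESS,FAILURE,DELAY

    The ENVID parameter is echoed back in the resulting DSN, which is one way to carry a tracking identifier without putting it in the Subject or body; support is optional, so a system relying on it still needs a fallback for servers that ignore the request.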

    Read the article

  • du excluding hard links possible?

    - by balor123
    I'm trying to determine how big a Git repository cloned from a local file system is. The clone creates hard links for some but not all files. How can I determine its disk usage? The best I can come up with right now is running "du -a" on the original, and again on the original together with the clone, to determine the difference, since each hard-linked file will be counted only once. Ideally, I would just run du on the clone and count each hard-linked file zero times.
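
    A hedged sketch of both approaches with GNU coreutils (paths are placeholders):

        du -s original/                  # size of the original alone
        du -s original/ clone/           # du counts each inode once per run, so hard links into
                                         # original/ are not charged to clone/ again
        # or: count only the clone's files that are NOT hard-linked anywhere else
        find clone/ -type f -links 1 -print0 | du -hc --files0-from=-

    The find variant effectively counts hard-linked files zero times, which is what the question asks for, at the cost of also skipping files that happen to have other hard links for unrelated reasons.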

    Read the article

  • Google Indexed an Unlinked Page

    - by Yar
    Google indexed a page on a site of mine that was not linked from any other page, ever. No one has ever put a link to it, and the directory contents were not browsable. How could this happen? I thought crawlers have no way to include a page that is not linked.

    Read the article

  • Using Solaris zfs + iscsi targets with Oracle VM

    - by wim.coekaerts
    I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available but I did have a solaris box, that I use for Oracle VDI, available. I set up a few iscsi targets on this solaris server and exported them to my 2 Oracle VM servers. Here's how I did this : (1) On the solaris side : # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT rpool 544G 129G 415G 23% ONLINE - I just have a simple zpool, called rpool, on this box. It has plenty of space available for my needs. So I will use rpool and I will create 5 50gb vols : zfs create -V 50G rpool/ovm1 zfs create -V 50G rpool/ovm2 zfs create -V 50G rpool/ovm3 zfs create -V 50G rpool/ovm4 zfs create -V 50G rpool/ovm5 I want to use these volumes for iscsi so I have to enable them as shared iscsi devices : zfs set shareiscsi=on rpool/ovm1 zfs set shareiscsi=on rpool/ovm2 zfs set shareiscsi=on rpool/ovm3 zfs set shareiscsi=on rpool/ovm4 zfs set shareiscsi=on rpool/ovm5 The command iscsitadm list target should list these devices so make sure they show up. # iscsitadm list target Target: rpool/ovm1 iSCSI Name: iqn.1986-03.com.sun:02:896c766c-0943-4da5-d47e-9575b5a0be36 Connections: 2 Target: rpool/ovm2 iSCSI Name: iqn.1986-03.com.sun:02:a3116b46-73e0-e8c2-e80c-9a4f71aff069 Connections: 2 Target: rpool/ovm3 iSCSI Name: iqn.1986-03.com.sun:02:a838c400-2730-c0d6-f2c2-ee186a0261c1 Connections: 2 Target: rpool/ovm4 iSCSI Name: iqn.1986-03.com.sun:02:2e046afb-d66d-4f3f-c5de-8115e0ddd931 Connections: 2 Target: rpool/ovm5 iSCSI Name: iqn.1986-03.com.sun:02:66109fbe-81ac-ef05-f85e-ab8c1f34cb43 Connections: 2 At this point I want to make sure that I have some access control on these devices. To make it easier, I will create an alias for my 2 servers and use the alias for the ACL. get the iqn from the 2 servers on my 2 ovm servers (wcoekaer-srv1, wcoekaer-srv2) get the content of /etc/iscsi/initiatorname.iscsi (for each server) InitiatorName=iqn.1986-03.com.sun:01:2a7526f0ffff On the solaris side create the aliases : iscsitadm create initiator -n iqn.1986-03.com.sun:01:2a7526f0ffff wcoekaer-srv1 iscsitadm create initiator -n iqn.1986-03.com.sun:01:e31b08110f1 wcoekaer-srv5 Add the ACL to the targets : iscsitadm modify target -l wcoekaer-srv1 rpool/ovm1 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm2 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm3 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm4 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm5 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm1 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm2 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm3 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm4 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm5 (2) the Oracle VM side On each server just do 2 simple things : # iscsiadm -m discovery -t sendtargets -p ca-vdi1 where ca-vdi1 is my solaris server name # service iscsi restart When I do cat /proc/partitions on my servers I will see the devices show up # cat /proc/partitions major minor #blocks name 8 0 160836480 sda 8 1 104391 sda1 8 2 3148740 sda2 8 3 1052257 sda3 253 0 6377804 dm-0 253 1 6377804 dm-1 253 2 6377804 dm-2 8 16 52428800 sdb 8 32 52428800 sdc 8 48 52428800 sdd 8 80 52428800 sdf 8 64 52428800 sde These 5 new devices sd[b..f] are shared storage for Oracle VM and can be used to pass through to the VM's as phy: devices or put ocfs2 on it and use as shared filesystem storage for dom0 repositories. 
I am setting up an 11gR2 rac template (the cool stuff Saar did) so I am using my devices to create a 2 node RAC cluster with phy: devices.

    Read the article
