Search Results

Search found 15323 results on 613 pages for 'db2 connect'.


  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once every few days I run into the following problem: my laptop (Debian GNU/Linux testing) suddenly becomes unable to make TCP connections to the internet. The following things continue to work fine:

        - UDP (DNS) and ICMP (ping) — I get instant responses
        - TCP connections to other machines on the local network (e.g. I can ssh to a neighbouring laptop)
        - everything is fine for the other machines on my LAN

    But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output:

        % curl -v google.com
        * About to connect() to google.com port 80 (#0)
        *   Trying 173.194.39.105... * Connection timed out
        *   Trying 173.194.39.110... * Connection timed out
        *   Trying 173.194.39.97... * Connection timed out
        *   Trying 173.194.39.102... * Timeout
        *   Trying 173.194.39.98... * Timeout
        *   Trying 173.194.39.96... * Timeout
        *   Trying 173.194.39.103... * Timeout
        *   Trying 173.194.39.99... * Timeout
        *   Trying 173.194.39.101... * Timeout
        *   Trying 173.194.39.104... * Timeout
        *   Trying 173.194.39.100... * Timeout
        *   Trying 2a00:1450:400d:803::1009... * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
        * Success
        * couldn't connect to host
        * Closing connection #0
        curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable

    Restarting the connection and/or reloading the network card's kernel module doesn't help; the only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every few days. My setup is a wireless router connected to the ISP via PPPoE. Any advice?
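    Since the failure only appears every few days, it may help to capture some state the next time it happens, before rebooting. A minimal diagnostic sketch (the interface name wlan0 is an assumption, and the conntrack proc entries only exist if connection tracking is loaded):

        # Is the kernel still emitting SYNs, and do any replies come back?
        sudo tcpdump -ni wlan0 'tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'

        # Is the connection-tracking table full? A full table silently drops new flows.
        cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max

        # Socket-state summary; a pile-up of SYN-SENT sockets confirms the symptom.
        ss -s

    Running the same tcpdump on the router at the same time would show whether the SYNs ever leave the laptop.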

  • How can one restrict network activity to only the VPN on a Mac and prevent unsecured internet activity?

    - by John
    I'm using Mac OS and connect to a VPN to hide my location and IP (I have the 'Send all traffic over VPN connection' box checked in the Network system preferences). I wish to remain anonymous and do not want to reveal my actual IP, hence the VPN. I have a preference pane called pearportVPN that automatically connects me to my VPN when I get online. The problem is that when I connect to the internet using AirPort (or other means), I have a few seconds of unsecured internet connection before my Mac logs onto my VPN. Therefore it's only a matter of time before I inadvertently expose my real IP address in the gap between connecting to the internet and logging onto my VPN. Is there any way I can block all traffic to and from my Mac that does not go through my VPN, so that nothing can connect unless I'm logged onto the VPN? I suspect I would need a third-party app that blocks all traffic except to the VPN server address, perhaps Intego VirusBarrier X6 or Little Snitch, but I'm not sure which is right or how to configure them. Any help would be much appreciated. Thanks!
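    One approach that needs no third-party software on OS X 10.7 or later is the built-in pf firewall. A rough "kill switch" sketch; the tunnel interface name (utun0), the VPN server address (203.0.113.10) and port (1194/udp) are assumptions you would need to adapt:

        # write a hypothetical anchor file with default-deny rules
        sudo tee /etc/pf.anchors/vpn.killswitch <<'EOF'
        block drop out all                              # deny everything by default
        pass out on lo0 all                             # allow loopback
        pass out proto udp to 203.0.113.10 port 1194    # allow the VPN handshake itself
        pass out on utun0 all                           # allow traffic inside the tunnel
        EOF

        # load the rules and enable pf
        sudo pfctl -f /etc/pf.anchors/vpn.killswitch
        sudo pfctl -E

    With rules like these, the few seconds before pearportVPN brings the tunnel up would produce no traffic at all rather than unsecured traffic.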

  • Can't access YouTube

    - by Agentleader1
    I cannot seem to connect to YouTube at all. If I connect to YouTube directly (youtube.com) I get an error page, and if I try to open a video directly (youtube.com/watch?v=) I get another (screenshots omitted). I believe I have malware on my computer, and the question is how do I get rid of it, or what else could this be? I can verify this is not a website or wifi issue: I've tried connecting from another computer on my wifi and it worked, so the problem is local to this machine. I've tried my best to remove malware and have also tried disabling possibly malicious extensions; everything came out with the same result: no help. I am also unable to find these hidden infections. I used Malwarebytes and Microsoft Security Essentials. Edit: This is OBVIOUSLY Windows. I am running a Fujitsu Lifebook laptop, Windows 7 Home Premium 64-bit, 4 GB RAM, Intel(R) Core(TM) i3-3110M CPU @ 2.40GHz. nslookup youtube.com:

  • PHP extensions & Apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, Thursday last week everything was fine on our server. I came in on Friday and it was a mess: PHP extensions were missing or not working, and Apache modules were gone (e.g. oci_* was gone completely, odbc_* was still there but not working, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering if somehow those extensions, which we installed months ago, were only saved in some kind of local memory and the restart wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little or no sense to me. To expand on an example of something that's gone very wrong, the PHP odbc_* extension: it's still on the server and doesn't return an undefined function error or anything, but it just cannot connect to the datasource any more. I've tested the same datasource and login details through the command line and they work perfectly fine, but all of a sudden odbc_connect() in PHP just can't connect:

        [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source.

    But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function. Does anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x by the way.
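    One thing worth checking is whether PHP and the command-line tools are reading the same ODBC configuration. A quick hedged checklist (the DSN name and credentials are placeholders):

        # where does unixODBC look for odbc.ini / odbcinst.ini?
        odbcinst -j

        # does the DSN work outside PHP? (this is the test that already passes)
        isql -v MY_DSN myuser mypass

        # which odbc configuration does the PHP build point at, and is the extension loaded?
        php -i | grep -i odbc

    If the server came back up on a different kernel, or a mount holding locally built extensions did not return after the RAM upgrade, comparing this output against a known-good machine may show what changed.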

  • SSH & SFTP: Should I assign one port to each user to facilitate bandwidth monitoring?

    - by BertS
    There is no easy way to track real-time per-user bandwidth usage for SSH and SFTP, so I think assigning one port to each user may help.

    Use case: Bob, with UID 1001, shall connect on port 31001; Alice, with UID 1002, on port 31002; John, with UID 1003, on port 31003. (I do not want to launch several sshd instances as proposed in question 247291.)

    1. Setup for SFTP. In /etc/ssh/sshd_config:

        Port 31001
        Port 31002
        Port 31003
        Subsystem sftp /usr/bin/sftp-wrapper.sh

    The file sftp-wrapper.sh starts the SFTP server only if the port is the correct one:

        #!/bin/sh
        mandatory_port=3`id -u`
        current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
        if [ $mandatory_port -eq $current_port ]
        then
          exec /usr/lib/openssh/sftp-server
        fi

    2. Additional setup for SSH. A few lines in /etc/profile prevent the user from connecting on the wrong port:

        if [ -n "$SSH_CONNECTION" ]
        then
          mandatory_port=3`id -u`
          current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
          if [ $mandatory_port -ne $current_port ]
          then
            echo "Please connect on port $mandatory_port."
            exit 1
          fi
        fi

    Benefits: now it should be easy to monitor per-user bandwidth usage, and an RRDtool-based application could produce per-user traffic charts. I know this won't be a perfect measure of bandwidth usage: for example, if somebody launches a brute-force attack on port 31001, there will be a lot of traffic on this port although none of it is from Bob. But this is not a problem for me: I do not need an exact computation of per-user bandwidth usage, just an indicator that is approximately correct in normal situations.

    Questions: Is the idea of assigning one port to each user a good one? Is the proposed setup a reliable one? If I have to open dozens of ports for many users, should I expect a performance penalty? Do you know an RRDtool-based application which could produce such charts?
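    For the accounting side, iptables can count bytes per port without any extra daemon; a rule with no target (no -j) only updates its counters. A small sketch, assuming the example ports above:

        # one counting rule per user port, in both directions
        for port in 31001 31002 31003; do
            iptables -A INPUT  -p tcp --dport "$port"
            iptables -A OUTPUT -p tcp --sport "$port"
        done

        # read the byte counters (-x prints exact values for a collector)
        iptables -vxnL INPUT
        iptables -vxnL OUTPUT

    A cron job could parse these counters into an RRD file and draw the per-user graphs from there.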

  • Securing communication between 2 Linux servers on a local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers on ports that only need to be available to these servers (and no one else). Thus, communication between the two servers can be established by either:

    (1) opening the required port(s) on both servers and authenticating according to the applications' rules, or
    (2) disabling iptables on the NICs the cross-connect cable is attached to (on both servers).

    Which method is more secure? In the first case, the needed ports are open to the external world but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs associated with the cross-connect cable, essentially all ports may be considered "open" between the two servers (so if the server presenting the DMZ is compromised, the attacker on that server could reach every port listening on the cross-connect link). Is there any conventional wisdom on how to secure communication between two servers on ports that only these servers need access to?
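    A middle ground is to leave iptables enabled on the cross-connect NIC but scope it to exactly the peer and the ports the applications need. A hedged sketch for the database server, where eth1 is the cross-connect interface, 192.168.100.1 is the DMZ server's address on that link, and 5432 stands in for whatever port the database actually listens on:

        # allow only the peer host, only on the needed port, only on this NIC
        iptables -A INPUT -i eth1 -s 192.168.100.1 -p tcp --dport 5432 \
                 -m state --state NEW,ESTABLISHED -j ACCEPT

        # everything else arriving on the cross-connect is dropped
        iptables -A INPUT -i eth1 -j DROP

    That way a compromise of the DMZ host still only exposes the one service, and the application-level authentication remains a second layer rather than the only one.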

  • How to build a network across buildings?

    - by Omie
    Hi! I need some help in building a network between hundreds of computers spread across multiple buildings of my college. Yes, I'll be doing this as part of my college project. Please see this image, it will give you a good idea of what I'm trying to achieve: http://i.imgur.com/rOohx.png All the computers in all the buildings should be able to reach the server. Once the network is up, there will be a set of services over the intranet and network use will be moderate; say there will be an email server and an HTTP server. My point is, I cannot afford much performance loss. It feels easy to connect the computers inside one building to each other; however, I'm clueless as to how to connect all of them to the server. I mean, just one cable won't be enough to connect one building to the server, right? How should I go about it? I am not expecting a detailed configuration, just a heads-up will do :) Thanks

  • Cannot access Domain Controller through VPN

    - by Markus
    In our small network there is a Windows 2008 R2 Domain Controller that also serves as Remote Access Server. For years, we could access this server and the resources in the network over a VPN connection without any problem. For some time now, however, I have been able to connect to the VPN, but my Windows 8 client (and another one I used for testing purposes) is not able to reach the domain controller afterwards. I can access any other server in the network, but there seems to be a problem regarding the trust between the client(s) and the server. If I connect the client to the network directly over a LAN cable, everything works as expected. I can also connect to another server over VPN and open an RDP session to the DC from there without a problem. On the client, whenever I try to access the DC, I get an access denied message. I've tried to update the group policies both over VPN and LAN. I've also removed the client from the domain and re-added it. The client shows a message that Windows requires valid login information when connected to the VPN, but my credentials are valid: they work when I log onto the client when not connected to the VPN, and also when connected to the LAN. Turning off the firewall on the client and the server did not change anything. DNS resolution works both on the server and the client. What else can I do to diagnose and solve the problem?

  • Very slow connection to SSH server from client (but not from other servers)

    - by AntonOfTheWoods
    I have an Ubuntu 12.04 laptop that is taking so long to connect to various servers (in different data centres) that it is a bit of a lottery whether I'll actually get a connection. Connections between the servers themselves are instantaneous, and on the servers I'm connecting to I've set (and rebooted for good measure):

        UseDNS no
        AddressFamily inet

    I also put the reverse DNS and IP of the cable connection I'm connecting from into their hosts files. If I connect from the laptop via telnet:

        telnet my.server 22

    then the connection is also instantaneous, so it doesn't appear to be a problem with an intervening firewall. I get the same behaviour whether I connect with the IP, a short name from my hosts file, or the FQDN. I'm connecting over a 50 Mbps (synchronous) cable connection, so that doesn't appear to be the problem, and when I do finally get a connection it's a good, quick, stable one. I have tried listening on another port (8000) and that makes no difference. Web and other connections from the laptop to the machine are also very good. Does anyone have any ideas here?
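    Since telnet to port 22 is instant, the delay is happening inside the SSH handshake rather than at the TCP layer, and a verbose client run usually shows exactly which step stalls. A quick sketch (the server name is a placeholder):

        # watch the handshake; note the last line printed before the long pause
        ssh -vvv user@my.server

        # a frequent culprit on Ubuntu clients is GSSAPI/Kerberos lookups:
        ssh -o GSSAPIAuthentication=no user@my.server

    If the second command connects instantly, putting "GSSAPIAuthentication no" in ~/.ssh/config makes the fix permanent. The stall can also sit in client-side DNS (e.g. a broken IPv6 resolver path), which the -vvv output would likewise reveal.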

  • [openVPN] Server & client on same machine, and multiple VPN servers

    - by HiWorld
    Hello everyone, I'm stuck configuring OpenVPN to build a chained VPN connection like this: CLIENT - VPN1 - VPN2 - INTERNET. I already know how to set up a normal single VPN, but I want to use a chain of VPNs, so let me explain what I have done. On VPN1 I have one OpenVPN instance running as a server (which the client connects to) and another running as a client connecting to VPN2, which runs as a server. Here comes the problem: when I connect VPN1 as a client of VPN2, I can no longer connect to VPN1 from CLIENT. My question is how to proceed with this. (I also have a third instance working as a server so VPN1 can be used without the chain.) On VPN2 there is one OpenVPN instance as a server which VPN1 connects to and which then forwards to the internet. I'm using TUN interfaces in the configs, and iptables is set up this way:

        # VPN1 - OpenVPN server1 network: 192.168.6.0 / VPN1's address as client of VPN2: 192.168.5.70
        iptables -t nat -A POSTROUTING -s 192.168.6.0 -j SNAT --to-source 192.168.5.70

        # VPN2 - OpenVPN server2 network: 192.168.5.0
        iptables -t nat -A POSTROUTING -s 192.168.5.0/24 -j SNAT --to-source EXTERNAL_IP_TO_INTERNET

    I hope someone can help me with this. Thanks in advance.
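    One common cause of exactly this symptom: when the VPN1-to-VPN2 client pulls a redirect-gateway default route, VPN1's replies to CLIENT's public address get routed into the VPN2 tunnel instead of back out the real uplink, so CLIENT can never complete the handshake. A minimal policy-routing sketch for VPN1; the interface name eth0 and the placeholder addresses are assumptions:

        # replies sourced from VPN1's public address keep using the real uplink
        ip rule add from VPN1_PUBLIC_IP table 100
        ip route add default via VPN1_UPLINK_GATEWAY dev eth0 table 100

    With that in place, the OpenVPN server instance on VPN1 stays reachable while all other traffic from the 192.168.6.0/24 clients still flows through VPN2.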

  • Using a virtual machine as a MySQL server

    - by ffmm
    I'm developing a Java program and I need a database. At the moment I'm using MAMP, which is pretty easy, but I would like to have a virtual machine (Ubuntu Server) instead, and I need to connect my Java program to this virtual machine using VirtualBox. The situation:

    - I installed VirtualBox on my Mac and installed an ubuntu-server machine
    - I set "Bridged Adapter" in the network settings of VirtualBox
    - I installed MySQL on the ubuntu-server and created a simple database (all works well from within Ubuntu)
    - running ifconfig on Ubuntu I get the IP 192.168.1.217

    In the Java program I made this function:

        public static Connection connect(String host, int port, String dbName, String user, String passwd) {
            Connection dbConnection = null;
            try {
                String dbString = null;
                Class.forName("com.mysql.jdbc.Driver").newInstance();
                dbString = "jdbc:mysql://" + host + ":" + port + "/" + dbName;
                dbConnection = DriverManager.getConnection(dbString, user, passwd);
            } catch (Exception e) {
                System.err.println("Failed to connect with the DB");
                e.printStackTrace();
            }
            return dbConnection;
        }

    and in main() I use:

        Connection con = connect("192.168.1.217", 3306, "Ciao", "root", "cocacola");

    3306 was a default value; I don't know if it is correct, but it works on MAMP. How can I find the correct port that I have to use with VirtualBox? When I run the program I hit the catch exception. What's wrong? PS: do I have to install Apache or something else?
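    3306 is MySQL's standard port, and with a bridged adapter the guest is reachable at its own LAN address, so the usual blockers are MySQL only listening on localhost or the user lacking remote grants. A hedged checklist for the guest, assuming Ubuntu's default MySQL 5.x layout (the database name and credentials mirror the question):

        # 1. let mysqld listen on all interfaces, not just 127.0.0.1
        sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
        sudo service mysql restart

        # 2. allow the user to connect from hosts other than localhost
        mysql -u root -p -e "GRANT ALL ON Ciao.* TO 'root'@'%' IDENTIFIED BY 'cocacola'; FLUSH PRIVILEGES;"

        # 3. from the Mac host, verify the port is actually reachable
        nc -vz 192.168.1.217 3306

    No Apache is needed for JDBC; that only matters for MAMP's web stack. The printed stack trace should also say which step is failing (e.g. "Connection refused" versus "Access denied").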

  • Set up layer 2 VLAN between 2 data centres

    - by user41679
    Hello, our data centre provider operates two sites; we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 VLAN between the two sites over a 20 Gbit connection, and that they'd just hand me an Ethernet cable at each end to connect the locations. At the current site we have Cisco 2960-48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls through which we connect to our internet provider. My question is: what would I need to do to connect the two sites? Could I just plug the Ethernet cables they provide into the Cisco switches, and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect the two through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the two sites, and each site would have its own internal DNS name like dc1.xx.com. Sorry if I'm being vague or haven't included enough information; I have fairly good hardware knowledge but we're down a netops guy at the moment and I'd like to get both sites online ASAP! Thanks in advance!

  • C++ - Conway's Game of Life & Stepping Backwards

    - by Gabe
    I was able to create a version Conway's Game of Life that either stepped forward each click, or just ran forward using a timer. (I'm doing this using Qt.) Now, I need to be able to save all previous game grids, so that I can step backwards by clicking a button. I'm trying to use a stack, and it seems like I'm pushing the old gridcells onto the stack correctly. But when I run it in QT, the grids don't change when I click BACK. I've tried different things for the last three hours, to no avail. Any ideas? gridwindow.cpp - My problem should be in here somewhere. Probably the handleBack() func. #include <iostream> #include "gridwindow.h" using namespace std; // Constructor for window. It constructs the three portions of the GUI and lays them out vertically. GridWindow::GridWindow(QWidget *parent,int rows,int cols) : QWidget(parent) { QHBoxLayout *header = setupHeader(); // Setup the title at the top. QGridLayout *grid = setupGrid(rows,cols); // Setup the grid of colored cells in the middle. QHBoxLayout *buttonRow = setupButtonRow(); // Setup the row of buttons across the bottom. QVBoxLayout *layout = new QVBoxLayout(); // Puts everything together. layout->addLayout(header); layout->addLayout(grid); layout->addLayout(buttonRow); setLayout(layout); } // Destructor. GridWindow::~GridWindow() { delete title; } // Builds header section of the GUI. QHBoxLayout* GridWindow::setupHeader() { QHBoxLayout *header = new QHBoxLayout(); // Creates horizontal box. header->setAlignment(Qt::AlignHCenter); this->title = new QLabel("CONWAY'S GAME OF LIFE",this); // Creates big, bold, centered label (title): "Conway's Game of Life." this->title->setAlignment(Qt::AlignHCenter); this->title->setFont(QFont("Arial", 32, QFont::Bold)); header->addWidget(this->title); // Adds widget to layout. return header; // Returns header to grid window. } // Builds the grid of cells. This method populates the grid's 2D array of GridCells with MxN cells. QGridLayout* GridWindow::setupGrid(int rows,int cols) { isRunning = false; QGridLayout *grid = new QGridLayout(); // Creates grid layout. grid->setHorizontalSpacing(0); // No empty spaces. Cells should be contiguous. grid->setVerticalSpacing(0); grid->setSpacing(0); grid->setAlignment(Qt::AlignHCenter); for(int i=0; i < rows; i++) //Each row is a vector of grid cells. { std::vector<GridCell*> row; // Creates new vector for current row. cells.push_back(row); for(int j=0; j < cols; j++) { GridCell *cell = new GridCell(); // Creates and adds new cell to row. cells.at(i).push_back(cell); grid->addWidget(cell,i,j); // Adds to cell to grid layout. Column expands vertically. grid->setColumnStretch(j,1); } grid->setRowStretch(i,1); // Sets row expansion horizontally. } return grid; // Returns grid. } // Builds footer section of the GUI. QHBoxLayout* GridWindow::setupButtonRow() { QHBoxLayout *buttonRow = new QHBoxLayout(); // Creates horizontal box for buttons. buttonRow->setAlignment(Qt::AlignHCenter); // Clear Button - Clears cell; sets them all to DEAD/white. QPushButton *clearButton = new QPushButton("CLEAR"); clearButton->setFixedSize(100,25); connect(clearButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Pauses timer before clearing. connect(clearButton, SIGNAL(clicked()), this, SLOT(handleClear())); // Connects to clear function to make all cells DEAD/white. buttonRow->addWidget(clearButton); // Forward Button - Steps one step forward. 
QPushButton *forwardButton = new QPushButton("FORWARD"); forwardButton->setFixedSize(100,25); connect(forwardButton, SIGNAL(clicked()), this, SLOT(handleForward())); // Signals to handleForward function.. buttonRow->addWidget(forwardButton); // Back Button - Steps one step backward. QPushButton *backButton = new QPushButton("BACK"); backButton->setFixedSize(100,25); connect(backButton, SIGNAL(clicked()), this, SLOT(handleBack())); // Signals to handleBack funciton. buttonRow->addWidget(backButton); // Start Button - Starts game when user clicks. Or, resumes game after being paused. QPushButton *startButton = new QPushButton("START/RESUME"); startButton->setFixedSize(100,25); connect(startButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Deletes current timer if there is one. Then restarts everything. connect(startButton, SIGNAL(clicked()), this, SLOT(handleStart())); // Signals to handleStart function. buttonRow->addWidget(startButton); // Pause Button - Pauses simulation of game. QPushButton *pauseButton = new QPushButton("PAUSE"); pauseButton->setFixedSize(100,25); connect(pauseButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Signals to pause function which pauses timer. buttonRow->addWidget(pauseButton); // Quit Button - Exits program. QPushButton *quitButton = new QPushButton("EXIT"); quitButton->setFixedSize(100,25); connect(quitButton, SIGNAL(clicked()), qApp, SLOT(quit())); // Signals the quit slot which ends the program. buttonRow->addWidget(quitButton); return buttonRow; // Returns bottom of layout. } /* SLOT method for handling clicks on the "clear" button. Receives "clicked" signals on the "Clear" button and sets all cells to DEAD. */ void GridWindow::handleClear() { for(unsigned int row=0; row < cells.size(); row++) // Loops through current rows' cells. { for(unsigned int col=0; col < cells[row].size(); col++) // Loops through the rows'columns' cells. { GridCell *cell = cells[row][col]; // Grab the current cell & set its value to dead. cell->setType(DEAD); } } } /* SLOT method for handling clicks on the "start" button. Receives "clicked" signals on the "start" button and begins game simulation. */ void GridWindow::handleStart() { isRunning = true; // It is running. Sets isRunning to true. this->timer = new QTimer(this); // Creates new timer. connect(this->timer, SIGNAL(timeout()), this, SLOT(timerFired())); // Connect "timerFired" method class to the "timeout" signal fired by the timer. this->timer->start(500); // Timer to fire every 500 milliseconds. } /* SLOT method for handling clicks on the "pause" button. Receives "clicked" signals on the "pause" button and stops the game simulation. */ void GridWindow::handlePause() { if(isRunning) // If it is running... this->timer->stop(); // Stops the timer. isRunning = false; // Set to false. } void GridWindow::handleForward() { if(isRunning); // If it's running, do nothing. else timerFired(); // It not running, step forward one step. } void GridWindow::handleBack() { std::vector<std::vector<GridCell*> > cells2; if(isRunning); // If it's running, do nothing. else if(backStack.empty()) cout << "EMPTYYY" << endl; else { cells2 = backStack.peek(); for (unsigned int f = 0; f < cells.size(); f++) // Loop through cells' rows. { for (unsigned int g = 0; g < cells.at(f).size(); g++) // Loop through cells columns. { cells[f][g]->setType(cells2[f][g]->getType()); // Set cells[f][g]'s type to cells2[f][g]'s type. 
} } cout << "PRE=POP" << endl; backStack.pop(); cout << "OYYYY" << endl; } } // Accessor method - Gets the 2D vector of grid cells. std::vector<std::vector<GridCell*> >& GridWindow::getCells() { return this->cells; } /* TimerFired function: 1) 2D-Vector cells2 is declared. 2) cells2 is initliazed with loops/push_backs so that all its cells are DEAD. 3) We loop through cells, and count the number of LIVE neighbors next to a given cell. --> Depending on how many cells are living, we choose if the cell should be LIVE or DEAD in the next simulation, according to the rules. -----> We save the cell type in cell2 at the same indice (the same row and column cell in cells2). 4) After check all the cells (and save the next round values in cells 2), we set cells's gridcells equal to cells2 gridcells. --> This causes the cells to be redrawn with cells2 types (white or black). */ void GridWindow::timerFired() { backStack.push(cells); std::vector<std::vector<GridCell*> > cells2; // Holds new values for 2D vector. These are the next simulation round of cell types. for(unsigned int i = 0; i < cells.size(); i++) // Loop through the rows of cells2. (Same size as cells' rows.) { vector<GridCell*> row; // Creates Gridcell* vector to push_back into cells2. cells2.push_back(row); // Pushes back row vectors into cells2. for(unsigned int j = 0; j < cells[i].size(); j++) // Loop through the columns (the cells in each row). { GridCell *cell = new GridCell(); // Creates new GridCell. cell->setType(DEAD); // Sets cell type to DEAD/white. cells2.at(i).push_back(cell); // Pushes back the DEAD cell into cells2. } // This makes a gridwindow the same size as cells with all DEAD cells. } for (unsigned int m = 0; m < cells.size(); m++) // Loop through cells' rows. { for (unsigned int n = 0; n < cells.at(m).size(); n++) // Loop through cells' columns. { unsigned int neighbors = 0; // Counter for number of LIVE neighbors for a given cell. // We know check all different variations of cells[i][j] to count the number of living neighbors for each cell. // We check m > 0 and/or n > 0 to make sure we don't access negative indexes (ex: cells[-1][0].) // We check m < size to make sure we don't try to access rows out of the vector (ex: row 5, if only 4 rows). // We check n < row size to make sure we don't access column item out of the vector (ex: 10th item in a column of only 9 items). // If we find that the Type = 1 (it is LIVE), then we add 1 to the neighbor. // Else - we add nothing to the neighbor counter. // Neighbor is the number of LIVE cells next to the current cell. if(m > 0 && n > 0) { if (cells[m-1][n-1]->getType() == 1) neighbors += 1; } if(m > 0) { if (cells[m-1][n]->getType() == 1) neighbors += 1; if(n < (cells.at(m).size() - 1)) { if (cells[m-1][n+1]->getType() == 1) neighbors += 1; } } if(n > 0) { if (cells[m][n-1]->getType() == 1) neighbors += 1; if(m < (cells.size() - 1)) { if (cells[m+1][n-1]->getType() == 1) neighbors += 1; } } if(n < (cells.at(m).size() - 1)) { if (cells[m][n+1]->getType() == 1) neighbors += 1; } if(m < (cells.size() - 1)) { if (cells[m+1][n]->getType() == 1) neighbors += 1; } if(m < (cells.size() - 1) && n < (cells.at(m).size() - 1)) { if (cells[m+1][n+1]->getType() == 1) neighbors += 1; } // Done checking number of neighbors for cells[m][n] // Now we change cells2 if it should switch in the next simulation step. // cells2 holds the values of what cells should be on the next iteration of the game. // We can't change cells right now, or it would through off our other cell values. 
// Apply game rules to cells: Create new, updated grid with the roundtwo vector. // Note - LIVE is 1; DEAD is 0. if (cells[m][n]->getType() == 1 && neighbors < 2) // If cell is LIVE and has less than 2 LIVE neighbors -> Set to DEAD. cells2[m][n]->setType(DEAD); else if (cells[m][n]->getType() == 1 && neighbors > 3) // If cell is LIVE and has more than 3 LIVE neighbors -> Set to DEAD. cells2[m][n]->setType(DEAD); else if (cells[m][n]->getType() == 1 && (neighbors == 2 || neighbors == 3)) // If cell is LIVE and has 2 or 3 LIVE neighbors -> Set to LIVE. cells2[m][n]->setType(LIVE); else if (cells[m][n]->getType() == 0 && neighbors == 3) // If cell is DEAD and has 3 LIVE neighbors -> Set to LIVE. cells2[m][n]->setType(LIVE); } } // Now we've gone through all of cells, and saved the new values in cells2. // Now we loop through cells and set all the cells' types to those of cells2. for (unsigned int f = 0; f < cells.size(); f++) // Loop through cells' rows. { for (unsigned int g = 0; g < cells.at(f).size(); g++) // Loop through cells columns. { cells[f][g]->setType(cells2[f][g]->getType()); // Set cells[f][g]'s type to cells2[f][g]'s type. } } } stack.h - Here's my stack. #ifndef STACK_H_ #define STACK_H_ #include <iostream> #include "node.h" template <typename T> class Stack { private: Node<T>* top; int listSize; public: Stack(); int size() const; bool empty() const; void push(const T& value); void pop(); T& peek() const; }; template <typename T> Stack<T>::Stack() : top(NULL) { listSize = 0; } template <typename T> int Stack<T>::size() const { return listSize; } template <typename T> bool Stack<T>::empty() const { if(listSize == 0) return true; else return false; } template <typename T> void Stack<T>::push(const T& value) { Node<T>* newOne = new Node<T>(value); newOne->next = top; top = newOne; listSize++; } template <typename T> void Stack<T>::pop() { Node<T>* oldT = top; top = top->next; delete oldT; listSize--; } template <typename T> T& Stack<T>::peek() const { return top->data; // Returns data in top item. } #endif gridcell.cpp - Gridcell implementation #include <iostream> #include "gridcell.h" using namespace std; // Constructor: Creates a grid cell. GridCell::GridCell(QWidget *parent) : QFrame(parent) { this->type = DEAD; // Default: Cell is DEAD (white). setFrameStyle(QFrame::Box); // Set the frame style. This is what gives each box its black border. this->button = new QPushButton(this); //Creates button that fills entirety of each grid cell. this->button->setSizePolicy(QSizePolicy::Expanding,QSizePolicy::Expanding); // Expands button to fill space. this->button->setMinimumSize(19,19); //width,height // Min height and width of button. QHBoxLayout *layout = new QHBoxLayout(); //Creates a simple layout to hold our button and add the button to it. layout->addWidget(this->button); setLayout(layout); layout->setStretchFactor(this->button,1); // Lets the buttons expand all the way to the edges of the current frame with no space leftover layout->setContentsMargins(0,0,0,0); layout->setSpacing(0); connect(this->button,SIGNAL(clicked()),this,SLOT(handleClick())); // Connects clicked signal with handleClick slot. redrawCell(); // Calls function to redraw (set new type for) the cell. } // Basic destructor. GridCell::~GridCell() { delete this->button; } // Accessor for the cell type. CellType GridCell::getType() const { return(this->type); } // Mutator for the cell type. Also has the side effect of causing the cell to be redrawn on the GUI. 
void GridCell::setType(CellType type) { this->type = type; redrawCell(); // Sets type and redraws cell. } // Handler slot for button clicks. This method is called whenever the user clicks on this cell in the grid. void GridCell::handleClick() { // When clicked on... if(this->type == DEAD) // If type is DEAD (white), change to LIVE (black). type = LIVE; else type = DEAD; // If type is LIVE (black), change to DEAD (white). setType(type); // Sets new type (color). setType Calls redrawCell() to recolor. } // Method to check cell type and return the color of that type. Qt::GlobalColor GridCell::getColorForCellType() { switch(this->type) { default: case DEAD: return Qt::white; case LIVE: return Qt::black; } } // Helper method. Forces current cell to be redrawn on the GUI. Called whenever the setType method is invoked. void GridCell::redrawCell() { Qt::GlobalColor gc = getColorForCellType(); //Find out what color this cell should be. this->button->setPalette(QPalette(gc,gc)); //Force the button in the cell to be the proper color. this->button->setAutoFillBackground(true); this->button->setFlat(true); //Force QT to NOT draw the borders on the button } Thanks a lot. Let me know if you need anything else.

  • Exalogic 2.0.1 Tea Break Snippets - Scripting Asset Creation

    - by The Old Toxophilist
    So far in this series we have looked at creating asset within the EMOC BUI but the Exalogic 2.0.1 installation also provide the Iaas cli as an alternative to most of the common functionality available within EMOC. The IaaS cli interface provides access to the functions that are available to a user logged into the BUI with the CloudUser Role. As such not all functionality is available from the command line interface however having said that the IaaS cli provides all the functionality required to create the Assets within a specific Account (Tenure). Because these action are common and repeatable I decided to wrap the functionality within a simple script that takes a simple input file and creates the Asset. Following the Script through will show us the required steps needed to create the various Assets within an Account and hence I will work through the various functions within the script below describing the steps. You will note from the various steps within the script that it is designed to pause between actions allowing the proceeding action to complete. The reason for this is because we could swamp EMOC with a series of actions and may end up with a situation where we are trying to action a Volume attached before the creation of the vServer and Volume have completed. processAssets() This function simply reads through the passed input file identifying what assets need to be created. An example of the input file can be found below. It can be seen that the input file can be used to create Assets in multiple Accounts during a single run. The order of the entries define the functions that need to be actioned as follows: Input Command Iaas Actions Parameters Production:Connect akm-describe-accounts akm-create-access-key iaas-create-key-pair iaas-describe-vnets iaas-describe-vserver-types iaas-describe-server-templates Username Password Production:Create|vServer iaas-run-vserver vServer Name vServer Type Name Template Name Comma separated list of network names which the vServer will connect to. Comma separated list of IPs for the specified networks. Production:Create|Volume iaas-create-volume Volume Name Volume Size Production:Attach|Volume iaas-attach-volumes-to-vserver vServer Name Comma separated list of volume names Production:Disconnect iaas-delete-key-pair akm-delete-access-key None connectToAccount() It can be seen from the connectToAccount function that before we can execute any Asset creation we must first connect to the appropriate account. To do this we will need the ID associated with the Account. This can be found by executing the akm-describe-accounts cli command which will return a list of all Accounts and there IDs. Once we have the Account ID we generate and Access key using the akm-create-access-key command and then a keypair with the iaas-create-key-pair command. At this point we now have all the information we need to access the specific named account. createVServer() This function simply retrieved the information from the input line and then will create the vServer using the iaas-run-vserver cli command. Reading the function you will notice that it takes the various input names for vServer Type, Template and Networks and converts them into the appropriate IDs. The IaaS cli will not work directly with component names and hence all IDs need to be found. createVolume() Function that simply takes the Volume name and Size then executes the iaas-create-volume command to create the volume. 
attachVolume() Takes the name of the Volume, which we may have just created, and a Volume then identifies the appropriate IDs before assigning the Volume to the vServer with the iaas-attach-volumes-to-vserver. disconnectFromAccount() Once we have finished connecting to the Account we simply remove the key pair with iaas-delete-key-pair and the access key with akm-delete-access-key although it may be useful to keep this if ssh is required and you do not subsequently modify the sshd information to allow unsecured access. By default the key is required for ssh access when a vServer is created from the command-line. CreateAssets.sh 1 export OCCLI=/opt/sun/occli/bin 2 export IAAS_HOME=/opt/oracle/iaas/cli 3 export JAVA_HOME=/usr/java/latest 4 export IAAS_BASE_URL=https://127.0.0.1 5 export IAAS_ACCESS_KEY_FILE=iaas_access.key 6 export KEY_FILE=iaas_access.pub 7 #CloudUser used to create vServers & Volumes 8 export IAAS_USER=exaprod 9 export IAAS_PASSWORD_FILE=root.pwd 10 export KEY_NAME=cli.recreate 11 export INPUT_FILE=CreateAssets.in 12 13 export ACCOUNTS_FILE=accounts.out 14 export VOLUMES_FILE=volumes.out 15 export DISTGRPS_FILE=distgrp.out 16 export VNETS_FILE=vnets.out 17 export VSERVER_TYPES_FILE=vstype.out 18 export VSERVER_FILE=vserver.out 19 export VSERVER_TEMPLATES=template.out 20 export KEY_PAIRS=keypairs.out 21 22 PROCESSING_ACCOUNT="" 23 24 function cleanTempFiles() { 25 rm -f $ACCOUNTS_FILE $VOLUMES_FILE $DISTGRPS_FILE $VNETS_FILE $VSERVER_TYPES_FILE $VSERVER_FILE $VSERVER_TEMPLATES $KEY_PAIRS $IAAS_PASSWORD_FILE $KEY_FILE $IAAS_ACCESS_KEY_FILE 26 } 27 28 function connectToAccount() { 29 if [[ "$ACCOUNT" != "$PROCESSING_ACCOUNT" ]] 30 then 31 if [[ "" != "$PROCESSING_ACCOUNT" ]] 32 then 33 $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE 34 $IAAS_HOME/bin/akm-delete-access-key $AK 35 fi 36 PROCESSING_ACCOUNT=$ACCOUNT 37 IAAS_USER=$ACCOUNT_USER 38 echo "$ACCOUNT_PASSWORD" > $IAAS_PASSWORD_FILE 39 $IAAS_HOME/bin/akm-describe-accounts --sep "|" > $ACCOUNTS_FILE 40 while read line 41 do 42 ACCOUNT_ID=${line%%|*} 43 line=${line#*|} 44 ACCOUNT_NAME=${line%%|*} 45 # echo "Id = $ACCOUNT_ID" 46 # echo "Name = $ACCOUNT_NAME" 47 if [[ "$ACCOUNT_NAME" == "$ACCOUNT" ]] 48 then 49 echo "Found Production Account $line" 50 AK=`$IAAS_HOME/bin/akm-create-access-key --account $ACCOUNT_ID --access-key-file $IAAS_ACCESS_KEY_FILE` 51 KEYPAIR=`$IAAS_HOME/bin/iaas-create-key-pair --key-name $KEY_NAME --key-file $KEY_FILE` 52 echo "Connected to $ACCOUNT_NAME" 53 break 54 fi 55 done < $ACCOUNTS_FILE 56 fi 57 } 58 59 function disconnectFromAccount() { 60 $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE 61 $IAAS_HOME/bin/akm-delete-access-key $AK 62 PROCESSING_ACCOUNT="" 63 } 64 65 function getNetworks() { 66 $IAAS_HOME/bin/iaas-describe-vnets --sep "|" > $VNETS_FILE 67 } 68 69 function getVSTypes() { 70 $IAAS_HOME/bin/iaas-describe-vserver-types --sep "|" > $VSERVER_TYPES_FILE 71 } 72 73 function getTemplates() { 74 $IAAS_HOME/bin/iaas-describe-server-templates --sep "|" > $VSERVER_TEMPLATES 75 } 76 77 function getVolumes() { 78 $IAAS_HOME/bin/iaas-describe-volumes --sep "|" > $VOLUMES_FILE 79 } 80 81 function getVServers() { 82 $IAAS_HOME/bin/iaas-describe-vservers --sep "|" > $VSERVER_FILE 83 } 84 85 function getNetworkId() { 86 while read line 87 do 88 NETWORK_ID=${line%%|*} 89 line=${line#*|} 90 NAME=${line%%|*} 91 if [[ "$NAME" == "$NETWORK_NAME" ]] 92 then 93 break 94 fi 95 done < $VNETS_FILE 96 } 97 98 
function getVSTypeId() { 99 while read line 100 do 101 VSTYPE_ID=${line%%|*} 102 line=${line#*|} 103 NAME=${line%%|*} 104 if [[ "$VSTYPE_NAME" == "$NAME" ]] 105 then 106 break 107 fi 108 done < $VSERVER_TYPES_FILE 109 } 110 111 function getTemplateId() { 112 while read line 113 do 114 TEMPLATE_ID=${line%%|*} 115 line=${line#*|} 116 NAME=${line%%|*} 117 if [[ "$TEMPLATE_NAME" == "$NAME" ]] 118 then 119 break 120 fi 121 done < $VSERVER_TEMPLATES 122 } 123 124 function getVolumeId() { 125 while read line 126 do 127 export VOLUME_ID=${line%%|*} 128 line=${line#*|} 129 NAME=${line%%|*} 130 if [[ "$NAME" == "$VOLUME_NAME" ]] 131 then 132 break; 133 fi 134 done < $VOLUMES_FILE 135 } 136 137 function getVServerId() { 138 while read line 139 do 140 VSERVER_ID=${line%%|*} 141 line=${line#*|} 142 NAME=${line%%|*} 143 if [[ "$VSERVER_NAME" == "$NAME" ]] 144 then 145 break; 146 fi 147 done < $VSERVER_FILE 148 } 149 150 function getVServerState() { 151 getVServers 152 while read line 153 do 154 VSERVER_ID=${line%%|*} 155 line=${line#*|} 156 NAME=${line%%|*} 157 line=${line#*|} 158 line=${line#*|} 159 VSERVER_STATE=${line%%|*} 160 if [[ "$VSERVER_NAME" == "$NAME" ]] 161 then 162 break; 163 fi 164 done < $VSERVER_FILE 165 } 166 167 function pauseUntilVServerRunning() { 168 # Wait until the Server is running before creating the next 169 getVServerState 170 while [[ "$VSERVER_STATE" != "RUNNING" ]] 171 do 172 getVServerState 173 echo "$NAME $VSERVER_STATE" 174 if [[ "$VSERVER_STATE" != "RUNNING" ]] 175 then 176 echo "Sleeping......." 177 sleep 60 178 fi 179 if [[ "$VSERVER_STATE" == "FAILED" ]] 180 then 181 echo "Will Delete $NAME in 5 Minutes....." 182 sleep 300 183 deleteVServer 184 echo "Deleted $NAME waiting 5 Minutes....." 185 sleep 300 186 break 187 fi 188 done 189 # Lets pause for a minute or two 190 echo "Just Chilling......" 191 sleep 60 192 echo "Ahhhhh we're getting there......." 193 sleep 60 194 echo "I'm almost at one with the universe......." 195 sleep 60 196 echo "Bong Reality Check !" 
197 } 198 199 function deleteVServer() { 200 $IAAS_HOME/bin/iaas-terminate-vservers --force --vserver-ids $VSERVER_ID 201 } 202 203 function createVServer() { 204 VSERVER_NAME=${ASSET_DETAILS%%|*} 205 ASSET_DETAILS=${ASSET_DETAILS#*|} 206 VSTYPE_NAME=${ASSET_DETAILS%%|*} 207 ASSET_DETAILS=${ASSET_DETAILS#*|} 208 TEMPLATE_NAME=${ASSET_DETAILS%%|*} 209 ASSET_DETAILS=${ASSET_DETAILS#*|} 210 NETWORK_NAMES=${ASSET_DETAILS%%|*} 211 ASSET_DETAILS=${ASSET_DETAILS#*|} 212 IP_ADDRESSES=${ASSET_DETAILS%%|*} 213 # Get Ids associated with names 214 getVSTypeId 215 getTemplateId 216 # Convert Network Names to Ids 217 NETWORK_IDS="" 218 while true 219 do 220 NETWORK_NAME=${NETWORK_NAMES%%,*} 221 NETWORK_NAMES=${NETWORK_NAMES#*,} 222 getNetworkId 223 if [[ "$NETWORK_IDS" != "" ]] 224 then 225 NETWORK_IDS="$NETWORK_IDS,$NETWORK_ID" 226 else 227 NETWORK_IDS=$NETWORK_ID 228 fi 229 if [[ "$NETWORK_NAME" == "$NETWORK_NAMES" ]] 230 then 231 break 232 fi 233 done 234 # Create vServer 235 echo "About to execute : $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES" 236 $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES 237 pauseUntilVServerRunning 238 } 239 240 function createVolume() { 241 VOLUME_NAME=${ASSET_DETAILS%%|*} 242 ASSET_DETAILS=${ASSET_DETAILS#*|} 243 VOLUME_SIZE=${ASSET_DETAILS%%|*} 244 # Create Volume 245 echo "About to execute : $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE" 246 $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE 247 # Lets pause 248 echo "Just Waiting 30 Seconds......" 249 sleep 30 250 } 251 252 function attachVolume() { 253 VSERVER_NAME=${ASSET_DETAILS%%|*} 254 ASSET_DETAILS=${ASSET_DETAILS#*|} 255 VOLUME_NAMES=${ASSET_DETAILS%%|*} 256 # Get vServer Id 257 getVServerId 258 # Convert Volume Names to Ids 259 VOLUME_IDS="" 260 while true 261 do 262 VOLUME_NAME=${VOLUME_NAMES%%,*} 263 VOLUME_NAMES=${VOLUME_NAMES#*,} 264 getVolumeId 265 if [[ "$VOLUME_IDS" != "" ]] 266 then 267 VOLUME_IDS="$VOLUME_IDS,$VOLUME_ID" 268 else 269 VOLUME_IDS=$VOLUME_ID 270 fi 271 if [[ "$VOLUME_NAME" == "$VOLUME_NAMES" ]] 272 then 273 break 274 fi 275 done 276 # Attach Volumes 277 echo "About to execute : $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS" 278 $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS 279 # Lets pause 280 echo "Just Waiting 30 Seconds......" 
281 sleep 30 282 } 283 284 function processAssets() { 285 while read line 286 do 287 ACCOUNT=${line%%:*} 288 line=${line#*:} 289 ACTION=${line%%|*} 290 line=${line#*|} 291 if [[ "$ACTION" == "Connect" ]] 292 then 293 ACCOUNT_USER=${line%%|*} 294 line=${line#*|} 295 ACCOUNT_PASSWORD=${line%%|*} 296 connectToAccount 297 298 ## Account Info 299 getNetworks 300 getVSTypes 301 getTemplates 302 303 continue 304 fi 305 if [[ "$ACTION" == "Create" ]] 306 then 307 ASSET=${line%%|*} 308 line=${line#*|} 309 ASSET_DETAILS=$line 310 if [[ "$ASSET" == "vServer" ]] 311 then 312 createVServer 313 314 continue 315 fi 316 if [[ "$ASSET" == "Volume" ]] 317 then 318 createVolume 319 320 continue 321 fi 322 fi 323 if [[ "$ACTION" == "Attach" ]] 324 then 325 ASSET=${line%%|*} 326 line=${line#*|} 327 ASSET_DETAILS=$line 328 if [[ "$ASSET" == "Volume" ]] 329 then 330 getVolumes 331 getVServers 332 attachVolume 333 334 continue 335 fi 336 fi 337 if [[ "$ACTION" == "Connect" ]] 338 then 339 disconnectFromAccount 340 341 continue 342 fi 343 done < $INPUT_FILE 344 } 345 346 # Should Parameterise this 347 348 while [ $# -gt 0 ] 349 do 350 case "$1" in 351 -a) INPUT_FILE="$2"; shift;; 352 *) echo ""; echo >&2 \ 353 "usage: $0 [-a <Asset Definition File>] (Default is CreateAssets.in)" 354 echo""; exit 1;; 355 *) break;; 356 esac 357 shift 358 done 359 360 361 362 363 processAssets 364 365 echo "**************************************" 366 echo "***** Finished Creating Assets *****" 367 echo "**************************************" 368 CreateAssetsProd.in Production:Connect|exaprod|welcome1 Production:Create|vServer|VS006|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.13,192.168.0.13,10.117.81.67,172.17.0.14 Production:Create|vServer|VS007|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.14,192.168.0.14,10.117.81.68,172.17.0.15 Production:Create|vServer|VS008|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.61,192.168.0.61,10.117.81.61,172.17.0.16 Production:Create|vServer|VS009|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.62,192.168.0.62,10.117.81.62,172.17.0.17 Production:Create|vServer|VS000|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.63,192.168.0.63,10.117.81.63,172.17.0.18 Production:Create|vServer|VS001|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.64,192.168.0.64,10.117.81.64,172.17.0.19 Production:Create|vServer|VS002|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.65,192.168.0.65,10.117.81.65,172.17.0.20 Production:Create|vServer|VS003|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.66,192.168.0.66,10.117.81.66,172.17.0.21 Production:Create|Volume|VS006|50 Production:Create|Volume|VS007|50 Production:Create|Volume|VS008|50 Production:Create|Volume|VS009|50 Production:Create|Volume|VS000|50 Production:Create|Volume|VS001|50 Production:Create|Volume|VS002|50 Production:Create|Volume|VS003|50 Production:Attach|Volume|VS006|VS006 Production:Attach|Volume|VS007|VS007 Production:Attach|Volume|VS008|VS008 Production:Attach|Volume|VS009|VS009 
Production:Attach|Volume|VS000|VS000 Production:Attach|Volume|VS001|VS001 Production:Attach|Volume|VS002|VS002 Production:Attach|Volume|VS003|VS003 Production:Disconnect Development:Connect|exadev|welcome1 Development:Create|vServer|VS014|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.24,10.117.81.71,172.17.0.24 Development:Create|vServer|VS015|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.25,10.117.81.72,172.17.0.25 Development:Create|vServer|VS016|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.26,10.117.81.73,172.17.0.26 Development:Create|vServer|VS017|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.27,10.117.81.74,172.17.0.27 Development:Create|vServer|VS018|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.28,10.117.81.75,172.17.0.28 Development:Create|vServer|VS019|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.29,10.117.81.76,172.17.0.29 Development:Create|vServer|VS020|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.30,10.117.81.77,172.17.0.30 Development:Create|vServer|VS021|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.31,10.117.81.78,172.17.0.31 Development:Create|vServer|VS022|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.32,10.117.81.79,172.17.0.32 Development:Create|vServer|VS023|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.33,10.117.81.80,172.17.0.33 Development:Create|vServer|VS024|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.34,10.117.81.81,172.17.0.34 Development:Create|vServer|VS025|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.35,10.117.81.82,172.17.0.35 Development:Create|vServer|VS026|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.36,10.117.81.83,172.17.0.36 Development:Create|vServer|VS027|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.37,10.117.81.84,172.17.0.37 Development:Create|Volume|VS014|50 Development:Create|Volume|VS015|50 Development:Create|Volume|VS016|50 Development:Create|Volume|VS017|50 Development:Create|Volume|VS018|50 Development:Create|Volume|VS019|50 Development:Create|Volume|VS020|50 Development:Create|Volume|VS021|50 Development:Create|Volume|VS022|50 Development:Create|Volume|VS023|50 Development:Create|Volume|VS024|50 Development:Create|Volume|VS025|50 Development:Create|Volume|VS026|50 Development:Create|Volume|VS027|50 Development:Attach|Volume|VS014|VS014 Development:Attach|Volume|VS015|VS015 Development:Attach|Volume|VS016|VS016 Development:Attach|Volume|VS017|VS017 Development:Attach|Volume|VS018|VS018 Development:Attach|Volume|VS019|VS019 Development:Attach|Volume|VS020|VS020 Development:Attach|Volume|VS021|VS021 Development:Attach|Volume|VS022|VS022 Development:Attach|Volume|VS023|VS023 Development:Attach|Volume|VS024|VS024 Development:Attach|Volume|VS025|VS025 
Development:Attach|Volume|VS026|VS026 Development:Attach|Volume|VS027|VS027 Development:Disconnect This entry was originally posted on the The Old Toxophilist Site.

  • TFS 2010 SDK: Integrating Twitter with TFS Programmatically

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS API,Integrate Twitter TFS,TFS Programming,ALM,TwitterSharp   Friends at ‘Twitter Sharp’ have created a wonderful .net API for twitter. With this blog post i will try to show you a basic TFS – Twitter integration scenario where i will retrieve the Team Project details programmatically and then publish these details on my twitter page. In future blogs i will be demonstrating how to create a windows service to capture the events raised by TFS and then publishing them in your social eco-system. Download Working Demo: Integrate Twitter - Tfs Programmatically   1. Setting up Twitter API Download Tweet Sharp from => https://github.com/danielcrenna/tweetsharp  Before you can start playing around with this, you will need to register an application on twitter. This is because Twitter uses the OAuth authentication protocol and will not issue an Access token unless your application is registered with them. Go to https://dev.twitter.com/ and register your application   Once you have registered your application, you will need ‘Customer Key’, ‘Customer Secret’, ‘Access Token’, ‘Access Token Secret’ 2. Connecting to Twitter using the Tweet Sharp API Create a new C# windows forms project and add reference to ‘Hammock.ClientProfile’, ‘Newtonsoft.Json’, ‘TweetSharp’ Add the following keys to the App.config (Note – The values for the keys below are in correct and if you try and connect using them then you will get an authorization failure error). Add a new class ‘TwitterProxy’ and use the following code to connect to the TwitterService (Read more about OAuthentication - http://dev.twitter.com/pages/auth) using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Configuration;using TweetSharp; namespace WindowsFormsApplication2{ public class TwitterProxy { private static string _hero; private static string _consumerKey; private static string _consumerSecret; private static string _accessToken; private static string _accessTokenSecret;  public static TwitterService ConnectToTwitter() { _consumerKey = ConfigurationManager.AppSettings["ConsumerKey"]; _consumerSecret = ConfigurationManager.AppSettings["ConsumerSecret"]; _accessToken = ConfigurationManager.AppSettings["AccessToken"]; _accessTokenSecret = ConfigurationManager.AppSettings["AccessTokenSecret"];  return new TwitterService(_consumerKey, _consumerSecret, _accessToken, _accessTokenSecret); } }} Time to Tweet! _twitterService = Proxy.TwitterProxy.ConnectToTwitter(); _twitterService.SendTweet("Hello World"); SendTweet will return the TweetStatus, If you do not get a 200 OK status that means you have failed authentication, please revisit the Access tokens. --RESPONSE: https://api.twitter.com/1/statuses/update.json HTTP/1.1 200 OK X-Transaction: 1308476106-69292-41752 X-Frame-Options: SAMEORIGIN X-Runtime: 0.03040 X-Transaction-Mask: a6183ffa5f44ef11425211f25 Pragma: no-cache X-Access-Level: read-write X-Revision: DEV X-MID: bd8aa0abeccb6efba38bc0a391a73fab98e983ea Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0 Content-Type: application/json; charset=utf-8 Date: Sun, 19 Jun 2011 09:35:06 GMT Expires: Tue, 31 Mar 1981 05:00:00 GMT Last-Modified: Sun, 19 Jun 2011 09:35:06 GMT Server: hi Vary: Accept-Encoding Content-Encoding: Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Transfer-Encoding: chunked   3. 
Integrate with TFS In my blog post Connect to TFS Programmatically i have in depth demonstrated how to connect to TFS using the TFS API. 1: // Update the AppConfig with the URI of the Team Foundation Server you want to connect to, Make sure you have View Team Project Collection Details permissions on the server 2: private static string _myUri = ConfigurationManager.AppSettings["TfsUri"]; 3: private static TwitterService _twitterService = null; 4:   5: private void button1_Click(object sender, EventArgs e) 6: { 7: lblNotes.Text = string.Empty; 8:   9: try 10: { 11: StringBuilder notes = new StringBuilder(); 12:   13: _twitterService = Proxy.TwitterProxy.ConnectToTwitter(); 14:   15: _twitterService.SendTweet("Hello World"); 16:   17: TfsConfigurationServer configurationServer = 18: TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri)); 19:   20: CatalogNode catalogNode = configurationServer.CatalogNode; 21:   22: ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren( 23: new Guid[] { CatalogResourceTypes.ProjectCollection }, 24: false, CatalogQueryOptions.None); 25:   26: // tpc = Team Project Collection 27: foreach (CatalogNode tpcNode in tpcNodes) 28: { 29: Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]); 30: TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId); 31:   32: notes.AppendFormat("{0} Team Project Collection : {1}{0}", Environment.NewLine, tpc.Name); 33: _twitterService.SendTweet(String.Format("http://Lunartech.codeplex.com - Connecting to Team Project Collection : {0} ", tpc.Name)); 34:   35: // Get catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection' 36: var tpNodes = tpcNode.QueryChildren( 37: new Guid[] { CatalogResourceTypes.TeamProject }, 38: false, CatalogQueryOptions.None); 39:   40: foreach (var p in tpNodes) 41: { 42: notes.AppendFormat("{0} Team Project : {1} - {2}{0}", Environment.NewLine, p.Resource.DisplayName,  "This is an open source project hosted on codeplex"); 43: _twitterService.SendTweet(String.Format(" Connected to Team Project: '{0}' – '{1}' ", p.Resource.DisplayName, "This is an open source project hosted on codeplex")); 44: } 45: } 46: notes.AppendFormat("{0} Updates posted on Twitter : {1} {0}", Environment.NewLine, @"http://twitter.com/lunartech1"); 47: lblNotes.Text = notes.ToString(); 48: } 49: catch (Exception ex) 50: { 51: lblError.Text = " Message : " + ex.Message + (ex.InnerException != null ? " Inner Exception : " + ex.InnerException : string.Empty); 52: } 53: }   The extensions you can build integrating TFS and Twitter are incredible!   Share this post :

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to move a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, plus a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism such as FTP.

In the first scenario, consider a company with geographically distributed business units, each collecting and managing its data locally for the products it sells. Each business unit has in-house data analysts who build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.

Figure 1: Target instance importing multiple remote models for local scoring

In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctor offices, which can score patient data locally.

Figure 2: Multiple target instances importing the same model from a source instance for local scoring

Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:

create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';

Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension; this is because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that the "01" is eliminated in the target system file name.

connect <source_schema>/<password>@SOURCE1_LINK;

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_SOURCE_DIR_OBJECT',
                                 'name =''MY_MINING_MODEL''');
END;

connect <target_schema>/<password>;

BEGIN
  DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '01.dmp',
                               'SOURCE1_LINK',
                               'MY_TARGET_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '.dmp');

  DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_TARGET_DIR_OBJECT');
END;

To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target.
For example, utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');
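As a side note, the directory objects the script references (MY_SOURCE_DIR_OBJECT, MY_TARGET_DIR_OBJECT) were covered in the previous post; a minimal sketch of creating one, with a placeholder path, looks like this:

-- Run as a privileged user on each instance; the path below is a placeholder.
CREATE OR REPLACE DIRECTORY MY_SOURCE_DIR_OBJECT AS '/u01/app/oracle/dm_export';
GRANT READ, WRITE ON DIRECTORY MY_SOURCE_DIR_OBJECT TO <schema>;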

    Read the article

  • PHP Login/Register system [migrated]

    - by Marian
    I found this good tutorial on creating a login/register system using PHP and MySQL. The thread is around 5 years old (edited last year) but it can still be useful: Beginner Simple Register-Login system. There seems to be an issue with both the login and register pages.

<?php
function register_form(){
    $date = date('D, M, Y');
    echo "<form action='?act=register' method='post'>"
        ."Username: <input type='text' name='username' size='30'><br>"
        ."Password: <input type='password' name='password' size='30'><br>"
        ."Confirm your password: <input type='password' name='password_conf' size='30'><br>"
        ."Email: <input type='text' name='email' size='30'><br>"
        ."<input type='hidden' name='date' value='$date'>"
        ."<input type='submit' value='Register'>"
        ."</form>";
}

function register(){
    $connect = mysql_connect("host", "username", "password");
    if(!$connect){
        die(mysql_error());
    }
    $select_db = mysql_select_db("database", $connect);
    if(!$select_db){
        die(mysql_error());
    }
    $username = $_REQUEST['username'];
    $password = $_REQUEST['password'];
    $pass_conf = $_REQUEST['password_conf'];
    $email = $_REQUEST['email'];
    $date = $_REQUEST['date'];
    if(empty($username)){ die("Please enter your username!<br>"); }
    if(empty($password)){ die("Please enter your password!<br>"); }
    if(empty($pass_conf)){ die("Please confirm your password!<br>"); }
    if(empty($email)){ die("Please enter your email!"); }
    $user_check = mysql_query("SELECT username FROM users WHERE username='$username'");
    $do_user_check = mysql_num_rows($user_check);
    $email_check = mysql_query("SELECT email FROM users WHERE email='$email'");
    $do_email_check = mysql_num_rows($email_check);
    if($do_user_check > 0){ die("Username is already in use!<br>"); }
    if($do_email_check > 0){ die("Email is already in use!"); }
    if($password != $pass_conf){ die("Passwords don't match!"); }
    $insert = mysql_query("INSERT INTO users (username, password, email) VALUES ('$username', '$password', '$email')");
    if(!$insert){
        die("There's little problem: ".mysql_error());
    }
    echo $username.", you are now registered. Thank you!<br><a href=login.php>Login</a> | <a href=index.php>Index</a>";
}

switch($act){
    default;
        register_form();
        break;
    case "register";
        register();
        break;
}
?>

Once the register button is pressed, the page does nothing: the fields are erased, and no data is added to the database and no error is given. I thought the problem might be the switch($act){ part, so I removed it and instead loaded the connection with require('connect.php'); where connect.php is:

<?php
mysql_connect("localhost","host","password");
mysql_select_db("database");
?>

I also removed the register_form(){ wrapper and the echo calls, turning the form into plain HTML:

<form action='register' method='post'>
Username: <input type='text' name='username' size='30'><br>
Password: <input type='password' name='password' size='30'><br>
Confirm your password: <input type='password' name='password_conf' size='30'><br>
Email: <input type='text' name='email' size='30'><br>
<input type='hidden' name='date' value='$date'>
<input type='submit' name="register" value='Register'>
</form>

And instead of having a function register(){, I replaced it with if($register){, so that pressing the Register button runs the PHP code. But this edit doesn't seem to work either. So what can the problem be? If needed I can re-add this code on my domain. The login page has the same issue: nothing happens when the button is pressed, besides emptying the fields.
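One likely culprit, offered as an assumption rather than a confirmed diagnosis: the tutorial's switch($act) relies on the long-deprecated register_globals setting to populate $act from the query string. With register_globals off (the default since PHP 4.2), $act is never set, so the switch always falls into the default branch and register() never runs. A minimal sketch of the fix:

<?php
// Hypothetical fix: read the action from the query string explicitly
// instead of relying on register_globals to create $act.
$act = isset($_GET['act']) ? $_GET['act'] : '';

switch($act){
    case "register":
        register();
        break;
    default:
        register_form();
        break;
}
?>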

    Read the article

  • Oracle GoldenGate 11g Release 2 Launch Webcast Replay Available

    - by Irem Radzik
    For those of you who missed the Oracle GoldenGate 11g Release 2 launch webcasts last week, the replay is now available from the following URL:

Harnessing the Power of the New Release of Oracle GoldenGate 11g

I would highly recommend watching the webcast to learn about the many new features of the new release and to hear the product management team respond to questions from the audience in a nice, long Q&A section. In my blog last week I listed the media coverage for this new release. There is a new article published by ITJungle discussing Oracle GoldenGate's heterogeneity and support for DB2 for iSeries: Oracle Completes DB2/400 Support in Data Replication Tool.

As mentioned in last week's blog, we received over 150 questions from the audience, and in this blog I'd like to continue posting some of the frequently asked questions and their answers:

Question: What are the fundamental differences between classic data capture and integrated data capture? Do both use the redo logs in the source database?
Answer: Yes, they both use redo logs. Classic capture parses the redo log data directly, whereas Integrated Capture lets the Oracle database parse the redo log record using an internal API.

Question: Does the GoldenGate version need to match the Oracle Database version?
Answer: No, they are not directly linked. Oracle GoldenGate 11g Release 2 supports Oracle Database version 10gR2 as well. For Oracle Database version 10gR1 and Oracle Database version 9i you will need GoldenGate 11g Release 1 or lower. And for Oracle Database 8i you need Oracle GoldenGate 10 or earlier versions.

Question: If I already use Data Guard, do I need GoldenGate?
Answer: Data Guard is designed as the best disaster recovery solution for Oracle Database. If you would like to implement a bidirectional Active-Active replication solution or need to move data between heterogeneous systems, you will need GoldenGate.

Question: On compression and GoldenGate: if the source uses compression, is it required that the target also use compression?
Answer: No, the source and target do not need to have the same compression settings.

Question: Does GoldenGate support the Advanced Security Option on the source database?
Answer: Yes, it does.

Question: Can I use GoldenGate to upgrade the Oracle Database to 11g and do an OS migration at the same time?
Answer: Yes, this is a very common project where GoldenGate can eliminate downtime, give flexibility to test the target as needed, and minimize risk with a fail-back option to the old environment. For more information on database upgrades please check out the following white papers: Best Practices for Migrating/Upgrading Oracle Database Using Oracle GoldenGate 11g, and Zero-Downtime Database Upgrades Using Oracle GoldenGate.

Question: Does GoldenGate create any triggers in the source database, at table level or row level, for real-time data integration?
Answer: No, GoldenGate does not create triggers.

Question: Can transformation be done after insert to the destination table, or does it need to be done before?
Answer: It can happen in the Capture (Extract) process, in the Delivery (Replicat) process, or in the target database.

For more resources on Oracle GoldenGate 11gR2 please check out our Oracle GoldenGate 11gR2 resource kit as well.

    Read the article

  • socket connection failed, telnet OK

    - by cf16
    My problem is that I can't connect two comps through sockets (Windows XP and Windows 7), although the server created with socket() is listening and I can telnet to it. It then receives information and does what it should. But if I run the corresponding socket client, I get error 10061. Moreover, I am behind a firewall; these two comps run within my LAN and the Windows firewalls are turned off:

comp1: 192.168.1.2, port 12345
comp2: 192.168.1.6, port 12345
router: 192.168.1.1

Maybe port forwarding could help? But most important for me is to answer why sockets fail if telnet works fine.

client:

int main(){
    // Initialize Winsock.
    WSADATA wsaData;
    int iResult = WSAStartup(MAKEWORD(2,2), &wsaData);
    if (iResult != NO_ERROR)
        printf("Client: Error at WSAStartup().\n");
    else
        printf("Client: WSAStartup() is OK.\n");

    // Create a socket.
    SOCKET m_socket;
    m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (m_socket == INVALID_SOCKET){
        printf("Client: socket() - Error at socket(): %ld\n", WSAGetLastError());
        WSACleanup();
        return 7;
    }else
        printf("Client: socket() is OK.\n");

    // Connect to a server.
    sockaddr_in clientService;
    clientService.sin_family = AF_INET;
    //clientService.sin_addr.s_addr = inet_addr("77.64.240.156");
    clientService.sin_addr.s_addr = inet_addr("192.168.1.5");
    //clientService.sin_addr.s_addr = inet_addr("87.207.222.5");
    clientService.sin_port = htons(12345);
    if (connect(m_socket, (SOCKADDR*)&clientService, sizeof(clientService)) == SOCKET_ERROR){
        printf("Client: connect() - Failed to connect.\n");
        wprintf(L"connect function failed with error: %ld\n", WSAGetLastError());
        iResult = closesocket(m_socket);
        if (iResult == SOCKET_ERROR)
            wprintf(L"closesocket function failed with error: %ld\n", WSAGetLastError());
        WSACleanup();
        return 6;
    }

    // Send and receive data
    int bytesSent;
    int bytesRecv = SOCKET_ERROR;
    // Be careful with the array bound, provide some checking mechanism
    char sendbuf[200] = "Client: Sending some test string to server...";
    char recvbuf[200] = "";
    bytesSent = send(m_socket, sendbuf, strlen(sendbuf), 0);
    printf("Client: send() - Bytes Sent: %ld\n", bytesSent);
    while(bytesRecv == SOCKET_ERROR){
        bytesRecv = recv(m_socket, recvbuf, 32, 0);
        if (bytesRecv == 0 || bytesRecv == WSAECONNRESET){
            printf("Client: Connection Closed.\n");
            break;
        }else
            printf("Client: recv() is OK.\n");
        if (bytesRecv < 0)
            return 0;
        else
            printf("Client: Bytes received - %ld.\n", bytesRecv);
    }
    system("pause");
    return 0;
}

server:

int main(){
    WORD wVersionRequested;
    WSADATA wsaData = {0};
    int wsaerr;

    // Using the MAKEWORD macro, request Winsock version 2.2
    wVersionRequested = MAKEWORD(2, 2);
    wsaerr = WSAStartup(wVersionRequested, &wsaData);
    if (wsaerr != 0){
        /* Tell the user that we could not find a usable WinSock DLL. */
        printf("Server: The Winsock dll not found!\n");
        return 0;
    }else{
        printf("Server: The Winsock dll found!\n");
        printf("Server: The status: %s.\n", wsaData.szSystemStatus);
    }

    /* Confirm that the WinSock DLL supports 2.2. Note that if the DLL supports
       versions greater than 2.2 in addition to 2.2, it will still return 2.2 in
       wVersion since that is the version we requested. */
    if (LOBYTE(wsaData.wVersion) != 2 || HIBYTE(wsaData.wVersion) != 2){
        printf("Server: The dll does not support the Winsock version %u.%u!\n", LOBYTE(wsaData.wVersion), HIBYTE(wsaData.wVersion));
        WSACleanup();
        return 0;
    }else{
        printf("Server: The dll supports the Winsock version %u.%u!\n", LOBYTE(wsaData.wVersion), HIBYTE(wsaData.wVersion));
        printf("Server: The highest version this dll can support: %u.%u\n", LOBYTE(wsaData.wHighVersion), HIBYTE(wsaData.wHighVersion));
    }

    ////////// Create a socket ////////////////////////
    // Internet address family, streaming socket, TCP/IP protocol (IPv4).
    SOCKET m_socket;
    m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    // Check for errors to ensure that the socket is a valid socket.
    if (m_socket == INVALID_SOCKET){
        printf("Server: Error at socket(): %ld\n", WSAGetLastError());
        WSACleanup();
        //return 0;
    }else{
        printf("Server: socket() is OK!\n");
    }

    ////////////// bind //////////////////////////////
    // Create a sockaddr_in object, set its values, and bind the socket
    // to port 12345 on any local address.
    sockaddr_in service;
    service.sin_family = AF_INET;
    service.sin_addr.s_addr = htons(INADDR_ANY); //inet_addr("127.0.0.1"); //inet_addr("192.168.1.2");
    service.sin_port = htons(12345);
    if (bind(m_socket, (SOCKADDR*)&service, sizeof(service)) == SOCKET_ERROR){
        printf("Server: bind() failed: %ld.\n", WSAGetLastError());
        closesocket(m_socket);
        //return 0;
    }else{
        printf("Server: bind() is OK!\n");
    }

    // Listen, allowing at most 1 pending connection.
    if (listen(m_socket, 1) == SOCKET_ERROR)
        printf("Server: listen(): Error listening on socket %ld.\n", WSAGetLastError());
    else{
        printf("Server: listen() is OK, I'm waiting for connections...\n");
    }

    // Temporary SOCKET object for accepting connections.
    SOCKET AcceptSocket;
    printf("Server: Waiting for a client to connect...\n");
    printf("***Hint: Server is ready...run your client program...***\n");

    // Continuous loop that checks for connection requests.
    while (1){
        AcceptSocket = SOCKET_ERROR;
        while (AcceptSocket == SOCKET_ERROR){
            AcceptSocket = accept(m_socket, NULL, NULL);
        }
        // else, accept the connection... note: now it is wrong implementation !!!!!!!! !! !! (only 1 char)
        // When the client connection has been accepted, transfer control from the
        // temporary socket to the original socket and stop checking for new connections.
        printf("Server: Client Connected! Mammamija.\n");
        m_socket = AcceptSocket;
        char recvBuf[200] = "";
        char * rc = recvBuf;
        int bytesRecv = recv(m_socket, recvBuf, 64, 0);
        if(bytesRecv == 0 || bytesRecv == WSAECONNRESET){
            cout << "server: connection closed.\n";
        }else{
            cout << "server: recv() is OK.\n";
            if(bytesRecv < 0){
                return 0;
            }else{
                printf("server: bytes received: %ld.\n", recvBuf);
            }
        }

    Read the article

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, first off, I am not an AS/400 guy at all, so please forgive any noobish questions here. Basically, I am working on a .Net application that needs to access the AS/400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the first request against a SPROC on the AS/400, it takes ~14 seconds to get the full data set. After that initial call, any subsequent call usually only takes ~1 second to return. This performance improvement remains for ~20 minutes or so before it takes 14 seconds again. The interesting part is that, if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS/400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS/400 acts this way when accessed through the iSeries Data Provider for .Net? Is there a better access method that I should use? Just in case, here's the code I am using to connect to the AS/400:

Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
Cmd.CommandType = CommandType.StoredProcedure

Using Conn
    Conn.Open()
    Dim Reader = Cmd.ExecuteReader()
    Using Reader
        While Reader.Read()
            'Do Something
        End While
        Reader.Close()
    End Using
    Conn.Close()
End Using
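Not an answer, but one workaround sketch worth testing while the root cause is investigated: prime the procedure once at application startup (and optionally on a timer shorter than the ~20-minute window) so interactive users only ever hit the fast path. This assumes the slow first call is some server-side preparation that expires after idle time; the sproc name is the placeholder from the question, and the code uses only the same provider calls shown above.

' Hypothetical warm-up: run once at startup so the expensive first
' execution is not paid by an interactive user.
Sub WarmUpSproc(connectionString As String)
    Using conn As New IBM.Data.DB2.iSeries.iDB2Connection(connectionString)
        Using cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", conn)
            cmd.CommandType = CommandType.StoredProcedure
            conn.Open()
            ' Execute and discard the rows; only the priming side effect matters.
            Using reader = cmd.ExecuteReader()
                While reader.Read()
                End While
            End Using
        End Using
    End Using
End Sub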

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present my problem related to SQL Server 2005 bidirectional replication.

What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A, and the changes should replicate to a copy of table A in the second database. When data on the second server is changed, those changes have to be propagated back to the first server. I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts, establishing two publications and two read-only subscriptions with loopback detection. The distribution database is created, publishing is enabled on both databases, and the distributor and publisher are up. We use some rules to control which records will be replicated, so we need to call our custom stored procedures during replication; the articles are therefore set to use custom update, insert, and delete stored procedures.

So far so good. Everything works fine and changes replicate, until updates are done on both tables simultaneously, or before the changes have replicated (which takes about 3-6 seconds). Both records then end up with different values:

UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

results in:

db1.dbo.TestTable: Col = 5
db2.dbo.TestTable: Col = 4

But we want last-change-wins replication. Please, is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide the sample replication script I am using. I am looking forward to your ideas, Mirek
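Since the articles already route changes through custom stored procedures, one sketch of a last-change-wins guard is to stamp every row with a modification time and have the custom update procedure apply an incoming change only when it is newer. This assumes a hypothetical LastModified datetime column, which the original schema may not have:

-- Hypothetical guard inside the custom replication update procedure:
-- apply the incoming change only if it is newer than what the row holds.
UPDATE dbo.TestTable
SET    Col = @col,
       LastModified = @last_modified
WHERE  ID = @id
  AND  (LastModified IS NULL OR LastModified <= @last_modified);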

    Read the article

  • PHP Transferring Photos From One Oracle Database Table to Another

    - by Jonathan Swift
    I am attempting to transfer a set of photos (BLOBs) from one table to another across databases. I'm nearly there, except for binding the photo parameter. I have the following code:

$conn_db1 = oci_pconnect('username', 'password', 'db1');
$conn_db2 = oci_pconnect('username', 'password', 'db2');

$parse_db1_select = oci_parse($conn_db1, "SELECT REF PID, BINARY_OBJECT PHOTOGRAPH FROM BLOBS");
$parse_db2_insert = oci_parse($conn_db2, "INSERT INTO PHOTOGRAPHS (PID, PHOTOGRAPH) VALUES (:pid, :photo)");

oci_execute($parse_db1_select);

while ($row = oci_fetch_assoc($parse_db1_select)) {
    $pid = $row['PID'];
    $photo = $row['PHOTOGRAPH'];
    oci_bind_by_name($parse_db2_insert, ':pid', $pid, -1, OCI_B_INT); // This line causes an error
    oci_bind_by_name($parse_db_insert, ':photo', $photo, -1, OCI_B_BLOB);
    oci_execute($parse_db2_insert);
}

oci_close($db1);
oci_close($db2);

But I get the following error on the line commented above:

Warning: oci_execute() [function.oci-execute]: ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 790
Serial number: 118

Does anyone know the right way to do this?
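Two observations, offered as assumptions since only this excerpt is visible: the second bind targets $parse_db_insert (note the missing "2"), so the BLOB is never actually bound to the insert statement; and OCI8 generally wants BLOB inserts to go through a LOB descriptor rather than a raw string bind. A sketch of the loop body using a descriptor:

// Hypothetical rework of the loop body: bind the BLOB via a LOB descriptor.
while ($row = oci_fetch_assoc($parse_db1_select)) {
    $pid = $row['PID'];
    // The fetched BLOB may arrive as an OCI-Lob object; load() yields its bytes.
    $photo = is_object($row['PHOTOGRAPH']) ? $row['PHOTOGRAPH']->load() : $row['PHOTOGRAPH'];

    $lob = oci_new_descriptor($conn_db2, OCI_D_LOB);
    oci_bind_by_name($parse_db2_insert, ':pid', $pid, -1, OCI_B_INT);
    oci_bind_by_name($parse_db2_insert, ':photo', $lob, -1, OCI_B_BLOB);
    oci_execute($parse_db2_insert, OCI_DEFAULT); // defer commit until the LOB is written
    $lob->save($photo);
    oci_commit($conn_db2);
    $lob->free();
}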

    Read the article

  • How to figure out which record has been deleted in an efficient way?

    - by janetsmith
    Hi, I am working on an in-house ETL solution, from db1 (Oracle) to db2 (Sybase). We need to transfer data incrementally (Change Data Capture?) into db2. I have only read access to the tables, so I can't create any table or trigger in Oracle db1. The challenge I am facing is: how to detect record deletion in Oracle? The solution I can think of is to use an additional standalone/embedded db (e.g. Derby, H2, etc.). This db contains two tables, old_data and new_data. old_data contains the primary key field from the table of interest in Oracle. Every time the ETL process runs, the new_data table is populated with the primary key field from the Oracle table. After that, I run the following SQL command to get the deleted rows:

SELECT old_data.id
FROM old_data
WHERE old_data.id NOT IN (SELECT new_data.id FROM new_data)

I think this will be a very expensive operation when the volume of data becomes very large. Do you have any better idea of doing this? Thanks.
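For what it's worth, a sketch of an anti-join formulation, which optimizers typically handle better than NOT IN at large volumes (and which avoids the NOT IN pitfall of returning nothing if new_data.id ever contains a NULL). Table and column names are taken from the question:

-- Hypothetical anti-join alternative to NOT IN; with an index on new_data.id
-- this usually plans as a hash anti-join rather than a repeated subquery probe.
SELECT o.id
FROM old_data o
LEFT JOIN new_data n ON n.id = o.id
WHERE n.id IS NULL;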

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >