Search Results

Search found 15377 results on 616 pages for 'socket programming'.


  • Java UnknownHostException When and Why?

    - by interstar
    I'm trying to post to a website from a Processing sketch. (Processing is basically Java running in a fancy environment.) I'm using this library: http://libraries.seltar.org/postToWeb/ but I don't know if that makes a difference. You can see from the stack trace below that this is just a wrapper for the Java standard library. Anyway, the important point is that the host "mysite.com" is up and running; I can see it from the browser. But when I try to post to it from Java, I just get an UnknownHostException. Given that the site is up, what else might this mean? The program is currently running inside the Processing environment, presumably as an applet.

        java.net.UnknownHostException: mysite.com
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:195)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:529)
            at java.net.Socket.connect(Socket.java:478)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
            at sun.net.www.http.HttpClient.New(HttpClient.java:306)
            at sun.net.www.http.HttpClient.New(HttpClient.java:323)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836)
            at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1014)
            at org.seltar.Bytes2Web.PostToWeb._post(PostToWeb.java:90)
            at org.seltar.Bytes2Web.ByteToWeb.post(ByteToWeb.java:66)
            at experimentPostToWeb.keyPressed(experimentPostToWeb.java:35)
            at processing.core.PApplet.handleKeyEvent(Unknown Source)
            at processing.core.PApplet.dequeueKeyEvents(Unknown Source)
            at processing.core.PApplet.handleDraw(Unknown Source)
            at processing.core.PApplet.run(Unknown Source)
            at java.lang.Thread.run(Thread.java:662)
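
    UnknownHostException means name resolution failed inside the JVM, independent of whether a browser can reach the site; applet sandbox restrictions and proxy settings are common culprits. A minimal sketch (my illustration, not from the question) that isolates the DNS lookup from the HTTP post, reusing the host name from the stack trace:

        import java.net.InetAddress;
        import java.net.UnknownHostException;

        public class DnsCheck {
            public static void main(String[] args) {
                try {
                    // Resolve the name exactly as the socket layer would.
                    InetAddress addr = InetAddress.getByName("mysite.com");
                    System.out.println("Resolved to: " + addr.getHostAddress());
                } catch (UnknownHostException e) {
                    // If this fails too, the JVM cannot resolve the name at all,
                    // independent of the postToWeb library.
                    System.out.println("JVM-level DNS lookup failed: " + e.getMessage());
                }
            }
        }

    If this resolves in a plain Java program but not in the sketch, the applet sandbox or a proxy configuration is the more likely cause than the library.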


  • Remotely connecting two non-local computers with sockets

    - by Velizar Hristov
    This question seems like something very obvious to ask, and yet I spent more than an hour trying to find an answer. First I host and wait for someone to connect. Then, from another instance of the application, I try to connect with a socket; for the constructor, I use (InetAddress, port). The port is always right, and everything works if I use "localhost" for the address. However, if I type my IP, I get an IOException. I even sent the application to someone else, gave him my IP, and it didn't work. The aim of the application is to connect two computers. It's in Java. Here is the relevant code.

    Server:

        ServerSocket serverSocket = new ServerSocket(port);
        Socket clientSocket = serverSocket.accept();

    Client:

        InetAddress a = InetAddress.getByName(ip);
        Socket s = new Socket(a, port);

    I don't get past that. Obviously, the values of int port and String ip are taken from text fields.

    Edit: the purpose of my application is to connect two non-local computers.
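
    A hedged diagnostic sketch (my addition, not from the question): connecting with an explicit timeout helps distinguish "connection refused" (a machine was reached but nothing is listening there, typically a router port-forwarding problem) from a timeout (the packets never arrived, typically a firewall or NAT issue). The address and port below are placeholders:

        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class ConnectProbe {
            public static void main(String[] args) throws Exception {
                try (Socket s = new Socket()) {
                    // 5-second timeout instead of the long platform default.
                    s.connect(new InetSocketAddress("203.0.113.10", 4000), 5000);
                    System.out.println("Connected");
                }
                // A ConnectException ("refused") vs. a SocketTimeoutException
                // points to different failures, as noted above.
            }
        }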


  • How does OpenStack Swift handle concurrent RESTful API requests?

    - by Chen Xie
    I installed a Swift service and was trying to learn its capacity for handling concurrent requests. So I created a massive number of threads in Java and sent requests via the RESTful API. Not surprisingly, as the number of requests climbed, the program started to throw exceptions:

        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.DualStackPlainSocketImpl.connect0(Native Method)
            at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
            at java.net.Socket.connect(Socket.java:579)
            at java.net.Socket.connect(Socket.java:528)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)

    Can anyone tell me how that timeout happened? I am curious how Swift handles those requests. Does it queue the requests, and when there are too many requests in the queue they wait too long and simply get kicked out of the queue? If that holds, does it mean it is an asynchronous mechanism for handling requests? Thanks.
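
    On the client side, this class of connect timeout usually comes from opening far more simultaneous sockets than the server (or the client OS) will accept. A minimal sketch (my illustration; the request itself is stubbed out since the question does not show it) that caps concurrency with a fixed-size pool instead of one thread per request:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ThrottledLoad {
            public static void main(String[] args) {
                // At most 50 requests in flight at once.
                ExecutorService pool = Executors.newFixedThreadPool(50);
                for (int i = 0; i < 10_000; i++) {
                    pool.submit(ThrottledLoad::sendSwiftRequest);
                }
                pool.shutdown();
            }

            // Placeholder for the HTTP call the question issues against Swift.
            static void sendSwiftRequest() { }
        }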


  • No buffer space available (maximum connections reached?) from Postgres EDB driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows:

        com.edb.util.PSQLException: The connection attempt failed.
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189)
            at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
            at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161)
            at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
            at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24)
            at com.edb.Driver.makeConnection(Driver.java:391)
            at com.edb.Driver.connect(Driver.java:266)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            ... 12 more
        Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at com.edb.core.PGStream.<init>(PGStream.java:70)
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115)
            ... 20 more

    When the error occurred we were not able to connect to the internet or the DB, and we had to reboot the system. But the error occurred again after 3 days at the same code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. it had not reached the max limit. Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get the related data. We are using the following system and applications:

        Windows 2003 Server SP2
        Java 1.6
        Postgres Plus Advanced Server 8.4
        edb-jdbc14.jar driver for connecting to the DB from Java

    We have used the default configuration of the Postgres DB except for increasing the connections from 100 to 120. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?
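
    On Windows, "No buffer space available" (WSAENOBUFS) often points to ephemeral-port or kernel buffer exhaustion from sockets that are opened but never released, which a netstat taken after a reboot can miss. A hedged sketch of the defensive pattern (my illustration; the connection URL and credentials are placeholders): acquire, use, and close the connection in one scope so a leak cannot accumulate.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class PollSketch {
            static void pollOnce() throws Exception {
                // try-with-resources guarantees the socket behind the
                // connection is released even if the query throws.
                try (Connection c = DriverManager.getConnection(
                         "jdbc:edb://dbhost:5444/mydb", "user", "password");
                     Statement st = c.createStatement();
                     ResultSet rs = st.executeQuery("SELECT 1")) {
                    while (rs.next()) {
                        // consume result
                    }
                }
            }
        }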


  • GUI agent accepts statuses from daemon and shows them using a progress indicator

    - by Pavel
    Hi to all! My application is a GUI agent, which communicates with a daemon through a Unix domain socket wrapped in CFSocket. So there is a main loop with an added CFRunLoop source. The daemon sends statuses and the agent shows them with a progress indicator. When there is any data on the socket, the callback function starts to work, and at this point I have to immediately show a new window with a progress indicator and increase the counter.

        // this function initiates the run loop for the listening socket
        - (int) AcceptDaemonConnection:(ConnectionRef)conn
        {
            int err = 0;
            conn->fSockCF = CFSocketCreateWithNative(NULL, (CFSocketNativeHandle) conn->fSockFD,
                                                     kCFSocketAcceptCallBack, ConnectionGotData, NULL);
            if (conn->fSockCF == NULL)
                err = EINVAL;
            if (err == 0) {
                conn->fRunLoopSource = CFSocketCreateRunLoopSource(NULL, conn->fSockCF, 0);
                if (conn->fRunLoopSource == NULL)
                    err = EINVAL;
                else
                    CFRunLoopAddSource(CFRunLoopGetCurrent(), conn->fRunLoopSource, kCFRunLoopDefaultMode);
                CFRelease(conn->fRunLoopSource);
            }
            return err;
        }

        // callback function
        void ConnectionGotData(CFSocketRef s, CFSocketCallBackType type, CFDataRef address,
                               const void * data, void * info)
        {
            #pragma unused(s)
            #pragma unused(address)
            #pragma unused(info)
            assert(type == kCFSocketAcceptCallBack);
            assert( (int *) data != NULL );
            assert( (*(int *) data) != -1 );

            TStatusUpdate status;
            int nativeSocket = *(int *) data;
            status = [agg AcceptPacket:nativeSocket]; // [stWindow InitNewWindow] inside
            [agg SendUpdateStatus:status.percent];
        }

    AcceptPacket receives a packet from the socket and tries to show a new window with a progress indicator. The corresponding function is called, but nothing happens... I think I have to make the main application loop do the work, interrupting the CFSocket loop... or send a notification? No idea...


  • C++ connect() keeps returning WSAETIMEDOUT over the internet but not locally

    - by KaiserJohaan
    Hello, for some reason my chat application always gets WSAETIMEDOUT when trying to connect to another person over the internet.

        int len_ip = GetWindowTextLength(GetDlgItem(hWnd, ID_EDIT_IP));
        char ipBuffer[16];
        SendMessage(GetDlgItem(hWnd, ID_EDIT_IP), WM_GETTEXT, 16, (LPARAM)ipBuffer);
        long host_ip = inet_addr(ipBuffer);

        int initializeConnection(long host_ip, HWND hWnd)
        {
            // initialize winsock
            WSADATA wdata;
            int result = WSAStartup(MAKEWORD(2,2), &wdata);
            if (result != 0) {
                return 0;
            }

            // setup socket
            tcp_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            if (tcp_sock == INVALID_SOCKET) {
                return 0;
            }

            // setup socket address
            SOCKADDR_IN tcp_sock_addr;
            tcp_sock_addr.sin_family = AF_INET;
            tcp_sock_addr.sin_port = SERVER_TCP_PORT;
            tcp_sock_addr.sin_addr.s_addr = host_ip;

            // connect to server
            if (connect(tcp_sock, (SOCKADDR*)&tcp_sock_addr, sizeof(tcp_sock_addr)) == SOCKET_ERROR) {
                return 0;
            }
            HRESULT hr = WSAGetLastError();

            // set socket in asynchronous mode
            if (WSAAsyncSelect(tcp_sock, hWnd, SOCKET_TCP, FD_READ | FD_WRITE | FD_CONNECT | FD_CLOSE) == SOCKET_ERROR) {
                return 0;
            }
            return 1;
        }

    For some reason it works perfectly fine on a local network between computers, but totally screws up over the internet. WSAETIMEDOUT is always returned (not connection refused, so it's not a port problem). It makes me believe something is wrong with the IP, but why on earth does it work on local addresses (like 192.168.2.4)? Any ideas? Cheers


  • Isn't an iterator in C++ a kind of pointer?

    - by Bilthon
    OK, this time I decided to make a list using the STL. I need to create a dedicated TCP socket for each client, so every time I get a connection I instantiate a socket and add a pointer to it to a list.

        list<MyTcp*> SocketList;        // This is the list of pointers to sockets
        list<MyTcp*>::iterator it;      // An iterator to the list of pointers to TCP sockets.

    Putting a new pointer to a socket in the list was easy, but now every time a connection ends I should disconnect the socket and delete the pointer so I don't get a huge memory leak, right? Well, I thought I was doing OK with this:

        it = SocketList.begin();
        while (it != SocketList.end()) {
            if ((*it)->getClientId() == id) {
                pSocket = it;   // <-------------- compiler complains at this line
                SocketList.remove(pSocket);
                pSocket->Disconnect();
                delete pSocket;
                break;
            }
        }

    But the compiler is saying this:

        error: invalid cast from type 'std::_List_iterator<MyTcp*>' to type 'MyTcp*'

    Can someone help me here? I thought I was doing things right. Isn't an iterator, at any given time, just pointing to one of the elements of the list? How can I fix it?


  • java.net.BindException: how can I clear the sockets or whatever is causing it?

    - by user2266067
    I need some help with, I guess, a simple networking-related problem I'm having. It will also help me better understand how all this works by knowing what isn't being .close()'d. I'm sure this is pretty simple, but for me it's all very new. This is the server program; I can most likely amend the client side the same way once I figure this out. Thanks.

        public class Server {
            public static void main(String[] args) {
                start();
            }

            static int start = 0;

            public static void start() {
                try {
                    ServerSocket serverSocket = new ServerSocket(4567);
                    Socket socket = serverSocket.accept();

                    // 1) Take and echo input (in this case a message)
                    BufferedReader bf = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                    String message = bf.readLine();
                    System.out.println("Message received from Client: " + message);

                    // 2) Respond to the client's message
                    PrintWriter printWriter = new PrintWriter(socket.getOutputStream(), true);
                    printWriter.println("Server echoing back the message ' " + message + " ' from Client");
                } catch (IOException e) {
                    System.out.println("e " + e);
                    System.exit(-1);
                }
                start++;
                clearUp();
                if (start < 5) {
                    System.out.println("Closing binds and restarting " + start);
                    start();
                }
            }

            public static void clearUp() {
                // How would I clear the stuff that is left bound so I can restart via start()
                // and avoid the java.net.BindException: Address already in use: JVM_Bind ?
            }
        }

    How would I clear the stuff that is left bound so I can restart via start() and avoid java.net.BindException: Address already in use: JVM_Bind?
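
    The BindException comes from the ServerSocket (and the accepted Socket) never being closed before start() recurses and tries to bind port 4567 again. A hedged sketch of the usual fix (my illustration, not the poster's code): bind once, accept in a loop, and let try-with-resources do the closing.

        import java.io.*;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class EchoServer {
            public static void main(String[] args) throws IOException {
                // Bind once; accept in a loop instead of rebinding per connection.
                try (ServerSocket serverSocket = new ServerSocket(4567)) {
                    for (int i = 0; i < 5; i++) {
                        try (Socket socket = serverSocket.accept();
                             BufferedReader in = new BufferedReader(
                                 new InputStreamReader(socket.getInputStream()));
                             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                            String message = in.readLine();
                            out.println("Server echoing back: " + message);
                        } // the per-client socket is closed here
                    }
                } // the listening socket is closed here, freeing the port
            }
        }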


  • New to JEE; architecture suggestions for a service/daemon?

    - by Kate
    I am brand new to the JEE world. As an exercise to try to familiarize myself with JEE, I'm trying to create a tiered web app, but I'm getting a little stuck on the best way to spin up a service in the background that does work.

    Parameters of the service: it must open and hold a socket connection and receive information from the connected server. There is a 1-to-1 correlation between a user and a new socket connection. So the idea is that the user presses a button on the web page, and somewhere on the server a socket connection is opened. For the remainder of the user's session (or until the user presses some sort of disconnect button) the socket remains open and pushes received information to some sort of centralized store that servlets can query and return to the user via AJAX.

    Is there a JEE way to handle this situation? Naturally what I would think to do is just write a Java application that listens on a port that the servlets can connect to and spawns new threads that open these sockets, but that seems very ad hoc to me.

    (PS: I am also new to Stack Overflow, so forgive me if it takes me some time to figure the site out!)
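
    One container-friendly sketch of this (my illustration, not an established answer; it assumes a Java EE 7 container, and every class, field, and method name below is invented): a @Singleton bean owns the per-user sockets on container-managed threads and writes into a shared map that servlets poll via AJAX.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.Socket;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.annotation.Resource;
        import javax.ejb.Singleton;
        import javax.enterprise.concurrent.ManagedExecutorService;

        @Singleton
        public class FeedManager {
            // sessionId -> latest data received on that user's socket
            private final Map<String, String> latest = new ConcurrentHashMap<>();

            @Resource
            private ManagedExecutorService executor; // container-managed threads

            public void connect(String sessionId, String host, int port) {
                executor.submit(() -> {
                    try (Socket s = new Socket(host, port);
                         BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()))) {
                        String line;
                        while ((line = in.readLine()) != null) {
                            latest.put(sessionId, line); // servlets poll this map
                        }
                    } catch (Exception e) {
                        latest.put(sessionId, "disconnected: " + e.getMessage());
                    }
                });
            }

            public String poll(String sessionId) {
                return latest.get(sessionId);
            }
        }

    The point of the sketch is that the container, not an ad-hoc daemon, owns the threads and the bean's lifecycle.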


  • What is wrong with this asynchronous task?

    - by bluebrain
    The method onPostExecute is simply never executed: I can see 16 in LogCat, but none of the log output from onPostExecute appears. I tried to debug it, and it seemed to jump to the first line of the class (the package line) after the return statement.

        private class Client extends AsyncTask<Integer, Void, Integer> {

            protected Integer doInBackground(Integer... params) {
                Log.e(TAG, 10 + "");
                try {
                    socket = new Socket(target, port);
                    Log.e(TAG, 11 + "");
                    oos = new ObjectOutputStream(socket.getOutputStream());
                    Log.e(TAG, 14 + "");
                    ois = new ObjectInputStream(socket.getInputStream());
                    Log.e(TAG, 15 + "");
                } catch (UnknownHostException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                Log.e(TAG, 16 + "");
                return 1;
            }

            protected void onPostExecute(Integer result) {
                Log.e(TAG, 13 + "");
                try {
                    Log.e(TAG, 12 + "");
                    oos.writeUTF(key);
                    Log.e(TAG, 13 + "");
                    if (ois.readInt() == OKAY) {
                        isConnected = true;
                        Log.e(TAG, 14 + "");
                    } else {
                        Log.e(TAG, 15 + "");
                        isConnected = false;
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                    isClosed = true;
                }
            }
        }
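
    One frequent cause of a missing onPostExecute (an assumption on my part; the question does not show the call site) is creating or executing the AsyncTask off the UI thread, since the callback is posted back through a handler bound to the main looper. A minimal sketch of the safe call pattern, where Client is the class from the question:

        import android.app.Activity;
        import android.os.Bundle;

        public class MainActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // Create and start the task on the UI thread; on older Android
                // releases a task created on a background thread can bind its
                // callback handler to the wrong thread, and onPostExecute is
                // then never delivered.
                new Client().execute(0);
            }
        }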


  • How can I close the output stream after a JSP has been included?

    - by stu
    I have a webpage that makes an Ajax call to get some data. That data takes a long time to calculate, so what I did was this: the first Ajax server call returns "loading..." and then the thread goes on to calculate the data and store it in a cache. Meanwhile, the client JavaScript checks back every few seconds with another Ajax call to see if the cache has been loaded yet. Here's my problem, and it might not be a problem. After the initial Ajax call, I do a

        ...getServletContext().getRequestDispatcher(jsppath).include(request, response);

    and then I use that thread to do the calculations. I don't mind tying up the webserver thread with this, but I want the browser to get the response and not wait for the server to close the socket. I can't tell if the server is closing the socket after the include, but I'm guessing it's not. So how can I forcibly close the stream after I've written out my response, before starting my long calculations? I tried

        o = response.getOutputStream();
        o.close();

    but I get an IllegalStateException saying that the output stream has already been gotten (presumably by the JSP I'm including). So my questions:

    1) Is the webserver closing the socket? (I'm guessing not, because you could always include another JSP.)
    2) If it is, as I assume, not closing the socket, how do I do that?
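
    A hedged alternative (my sketch, not the poster's approach) that sidesteps closing the socket at all: hand the long computation to a background executor so the request thread finishes normally, which lets the container complete the response, while the polling Ajax calls check the cache. The names cache and computeData are placeholders:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class CalcStarter {
            private static final ExecutorService POOL = Executors.newFixedThreadPool(4);
            private static final ConcurrentHashMap<String, Object> CACHE = new ConcurrentHashMap<>();

            // Called from the servlet: returns immediately, so the container
            // can flush and finish the "loading..." response on its own.
            public static void startCalculation(String key) {
                POOL.submit(() -> CACHE.put(key, computeData(key)));
            }

            public static Object lookup(String key) {
                return CACHE.get(key); // null until the computation finishes
            }

            private static Object computeData(String key) {
                return "expensive result for " + key; // placeholder computation
            }
        }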


  • Threaded Python port scanner

    - by Amnite
    I am having issues with a port scanner I'm editing to use threads. These are the basics of the original code:

        for i in range(0, 2000):
            s = socket(AF_INET, SOCK_STREAM)
            result = s.connect_ex((TargetIP, i))
            if(result == 0):
                c = "Port %d: OPEN\n" % (i,)
            s.close()

    This takes approx 33 minutes to complete. So I thought I'd thread it to make it run a little faster. This is my first threading project, so it's nothing too extreme, but I've run the following code for about an hour and get no exceptions yet no output. Am I just doing the threading wrong, or what?

        import threading
        from socket import *
        import time

        a = 0
        b = 0
        c = ""
        d = ""

        def ScanLow():
            global a
            global c
            for i in range(0, 1000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0):
                    c = "Port %d: OPEN\n" % (i,)
                s.close()
                a += 1

        def ScanHigh():
            global b
            global d
            for i in range(1001, 2000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0):
                    d = "Port %d: OPEN\n" % (i,)
                s.close()
                b += 1

        Target = raw_input("Enter Host To Scan:")
        TargetIP = gethostbyname(Target)
        print "Start Scan On Host ", TargetIP
        Start = time.time()

        threading.Thread(target = ScanLow).start()
        threading.Thread(target = ScanHigh).start()

        e = a + b
        while e < 2000:
            f = raw_input()

        End = time.time() - Start
        print c
        print d
        print End
        g = raw_input()


  • 500 internal server error on a certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with

        uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini

    and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder that is symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen instance as mentioned above, with the .ini file located in my app folder.

    My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;
            location / { try_files $uri @app; }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group

        #application's base folder
        base = /home/metheuser/webapps/ers_portal

        #python module to import
        app = run_web
        module = %(app)

        home = %(base)/ers_portal_venv
        pythonpath = %(base)

        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock

        #permissions for the socket file
        chmod-socket = 666

        #uwsgi varible only, does not relate to your flask application
        callable = app

        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names,
                                   data = data)  # dictionary of data

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uwsgi log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the views "my_table" route prints into the log file. But not once it stops working. Any ideas?


  • Can't ssh tunnel to access a remote mysql server

    - by hobbes3
    I can't seem to figure out why I can't use an ssh tunnel to connect to my remote MySQL server. I create the tunnel with

        [hobbes3@hobbes3] ~ $ ssh linode -L 3307:localhost:3306

    Then on another terminal I try

        [hobbes3@hobbes3] ~ $ mysql -h localhost -P 3307 -u root --protocol=tcp -p
        Enter password:
        ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2

    On the server, it shows this:

        root@li534-120 ~ # channel 4: open failed: connect failed: Connection refused

    Here is my my.cnf on the server:

        [mysqld]
        # Settings user and group are ignored when systemd is used (fedora >= 15).
        # If you need to run mysqld under different user or group,
        # customize your systemd unit file for mysqld according to the
        # instructions in http://fedoraproject.org/wiki/Systemd
        user=mysql
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        # Semisynchronous Replication
        # http://dev.mysql.com/doc/refman/5.5/en/replication-semisync.html
        # uncomment next line on MASTER
        ;plugin-load=rpl_semi_sync_master=semisync_master.so
        # uncomment next line on SLAVE
        ;plugin-load=rpl_semi_sync_slave=semisync_slave.so

        # Others options for Semisynchronous Replication
        ;rpl_semi_sync_master_enabled=1
        ;rpl_semi_sync_master_timeout=10
        ;rpl_semi_sync_slave_enabled=1

        # http://dev.mysql.com/doc/refman/5.5/en/performance-schema.html
        ;performance_schema

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

        [mysqld]
        port = 3306
        socket=/var/lib/mysql/mysql.sock
        skip-external-locking
        key_buffer_size = 64M
        max_allowed_packet = 128M
        sort_buffer_size = 512K
        net_buffer_length = 8K
        read_buffer_size = 256K
        read_rnd_buffer_size = 512K
        myisam_sort_buffer_size = 8M
        thread_cache = 8
        max_connections = 25
        query_cache_size = 16M
        table_open_cache = 1024
        table_definition_cache = 1024
        tmp_table_size = 32M
        max_heap_table_size = 32M
        bind-address = 0.0.0.0

    Not sure if this helps, but here is the MySQL user list:

        mysql> select * from mysql.user;
        +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+
        | Host | User | Password | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin | authentication_string |
        +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+
        | localhost | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |  |  |  |  | 0 | 0 | 0 | 0 |  |  |
        | 127.0.0.1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |  |  |  |  | 0 | 0 | 0 | 0 |  |  |
        | ::1       | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |  |  |  |  | 0 | 0 | 0 | 0 |  |  |
        +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+
        3 rows in set (0.00 sec)

    I read about how MySQL treats localhost vs 127.0.0.1 as connecting via a socket or TCP, respectively. But I'm starting to get confused about what's really going on, or whether socket vs TCP is even the issue. Thanks in advance, and I'm open to any tips and suggestions!

    Some more info: my MySQL client, running OS X 10.8.4, is

        mysql Ver 14.14 Distrib 5.6.10, for osx10.8 (x86_64) using EditLine wrapper

    My MySQL server, running on CentOS 6.4 32-bit, is

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+--------------------------------------+
        | Variable_name           | Value                                |
        +-------------------------+--------------------------------------+
        | innodb_version          | 1.1.8                                |
        | protocol_version        | 10                                   |
        | slave_type_conversions  |                                      |
        | version                 | 5.5.28                               |
        | version_comment         | MySQL Community Server (GPL) by Remi |
        | version_compile_machine | i686                                 |
        | version_compile_os      | Linux                                |
        +-------------------------+--------------------------------------+
        7 rows in set (0.00 sec)

  • Is the Asus Lion Square compatible with an AMD Athlon II AM3?

    - by wag2639
    Is the Asus Lion Square I bought compatible with an AMD Athlon II X3 435 Socket AM3 processor? I know that, strictly speaking, the Lion Square specifies AM2, but I'm a little confused, since AM2 and AM3 are supposed to be socket compatible (I'm a little confused here as well, but I assume it means an AM3 board will support AM2/AM2+ CPUs). However, will there be a problem with chip height and spacing? Or do people have experience asking ASUS for a standoff adapter?


  • MySQL died during the night on Ubuntu 12.04.1

    - by Olivier
    I can't explain why, but somehow during the night one of my MySQL servers running on an Ubuntu 12.04.1 box broke. The service is running, but I can't log in anymore (to SQL); the previous password is no longer working. It does not look like the server has been compromised (nothing in /var/auth.log). It looks like some automatic security upgrade (the server is configured to perform those) has occurred and broken something. The MySQL server restarted a couple of times in the logs at the time errors started to happen (I get email when a CRON task fails). In the logs it complains about an unset root password (I do have cron jobs running all day using SQL, so the password was set and working for months). Anyway, I can't log in without a password either! Do you have any idea of what could have happened? How do I get my databases back?

    This line looks strange:

        Nov 6 06:36:12 ns398758 mysqld_safe[6676]: ERROR: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ALTER TABLE user ADD column Show_view_priv enum('N','Y') CHARACTER SET utf8 NOT ' at line 1

    Here is the full log below:

        Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: To do so, start the server, then issue the following commands:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysqladmin -u root password 'new-password'
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysqladmin -u root -h ns398758.ovh.net password 'new-password'
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Alternatively you can run:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysql_secure_installation
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: which will also give you the option of removing the test
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: databases and anonymous user created by default. This is
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: strongly recommended for production servers.
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: See the manual for more instructions.
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Please report any problems with the /usr/scripts/mysqlbug script!
        Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
        Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 [Note] Plugin 'FEDERATED' is disabled.
        Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: The InnoDB memory heap is disabled
        Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Compressed tables use zlib 1.2.3.4
        Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Initializing buffer pool, size = 128.0M
        Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Completed initialization of buffer pool
        Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: highest supported file format is Barracuda.
        Nov 6 06:36:07 ns398758 mysqld_safe[6632]: 121106 6:36:07 InnoDB: Waiting for the background threads to start
        Nov 6 06:36:08 ns398758 mysqld_safe[6632]: 121106 6:36:08 InnoDB: 1.1.8 started; log sequence number 29276459701
        Nov 6 06:36:08 ns398758 mysqld_safe[6632]: 121106 6:36:08 InnoDB: Starting shutdown...
        Nov 6 06:36:09 ns398758 mysqld_safe[6632]: 121106 6:36:09 InnoDB: Shutdown completed; log sequence number 29276459701
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 [Note] Plugin 'FEDERATED' is disabled.
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: The InnoDB memory heap is disabled
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Compressed tables use zlib 1.2.3.4
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Initializing buffer pool, size = 128.0M
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Completed initialization of buffer pool
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: highest supported file format is Barracuda.
        Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Waiting for the background threads to start
        Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 InnoDB: 1.1.8 started; log sequence number 29276459701
        Nov 6 06:36:12 ns398758 mysqld_safe[6676]: ERROR: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ALTER TABLE user ADD column Show_view_priv enum('N','Y') CHARACTER SET utf8 NOT ' at line 1
        Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 [ERROR] Aborting
        Nov 6 06:36:12 ns398758 mysqld_safe[6676]:
        Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 InnoDB: Starting shutdown...
        Nov 6 06:36:13 ns398758 mysqld_safe[6676]: 121106 6:36:13 InnoDB: Shutdown completed; log sequence number 29276459701
        Nov 6 06:36:13 ns398758 mysqld_safe[6676]: 121106 6:36:13 [Note] /usr/sbin/mysqld: Shutdown complete
        Nov 6 06:36:13 ns398758 mysqld_safe[6676]:
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 [Note] Plugin 'FEDERATED' is disabled.
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: The InnoDB memory heap is disabled
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Compressed tables use zlib 1.2.3.4
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Initializing buffer pool, size = 128.0M
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Completed initialization of buffer pool
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: highest supported file format is Barracuda.
        Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Waiting for the background threads to start
        Nov 6 06:36:14 ns398758 mysqld_safe[6697]: 121106 6:36:14 InnoDB: 1.1.8 started; log sequence number 29276459701
        Nov 6 06:36:14 ns398758 mysqld_safe[6697]: 121106 6:36:14 InnoDB: Starting shutdown...
        Nov 6 06:36:15 ns398758 mysqld_safe[6697]: 121106 6:36:15 InnoDB: Shutdown completed; log sequence number 29276459701
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 [Note] Plugin 'FEDERATED' is disabled.
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: The InnoDB memory heap is disabled
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Compressed tables use zlib 1.2.3.4
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Initializing buffer pool, size = 128.0M
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Completed initialization of buffer pool
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: highest supported file format is Barracuda.
        Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Waiting for the background threads to start
        Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 InnoDB: 1.1.8 started; log sequence number 29276459701
        Nov 6 06:36:16 ns398758 mysqld_safe[6718]: ERROR: 1050 Table 'plugin' already exists
        Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 [ERROR] Aborting
        Nov 6 06:36:16 ns398758 mysqld_safe[6718]:
        Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 InnoDB: Starting shutdown...
        Nov 6 06:36:17 ns398758 mysqld_safe[6718]: 121106 6:36:17 InnoDB: Shutdown completed; log sequence number 29276459701
        Nov 6 06:36:17 ns398758 mysqld_safe[6718]: 121106 6:36:17 [Note] /usr/sbin/mysqld: Shutdown complete
        Nov 6 06:36:17 ns398758 mysqld_safe[6718]:
        Nov 6 06:36:19 ns398758 /etc/mysql/debian-start[6816]: Upgrading MySQL tables if necessary.
        Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
        Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Looking for 'mysql' as: /usr/bin/mysql
        Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: col_digitas.acos OK
        Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: col_digitas.aros OK
        ...


  • Cannot get PostgreSQL to start on Ubuntu Hardy

    - by Greg Arcara
    I am getting this error with PostgreSQL 8.4 on Ubuntu Hardy:

        $ ./postgres -D /usr/local/pgsql/data
        LOG:  could not bind IPv4 socket: Cannot assign requested address
        HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
        WARNING:  could not create listen socket for "localhost"
        FATAL:  could not create any TCP/IP sockets

    Here is my hosts file content (I've been finding a lot of stuff about this, so I'm just posting it now):

        127.0.0.1    localhost
        127.0.1.1    Home-Dev


  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get 2x to 15x performance improvements easily by applying some simple, general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

    Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARMM, BLAST, and FASTA.

    Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
    Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days by applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

    The study's results show that personal scientific codes are running many times slower than they should, and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

        #    cache performance    redundant operations    loop structures    performance improvement
        1    x    x    15.5
        2    x    2.8
        3    x    x    2.5
        4    x    2.1
        5    x    x    2.0
        6    x    5.0
        7    x    5.8
        8    x    6.3
        9    2.2
        10   x    x    3.3

    Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do.

    When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
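
    To make "cache order" concrete, here is a minimal Java sketch of the same principle (my illustration, not from the article; Java two-dimensional arrays are arrays of rows, so the row index must vary in the outer loop; the array size and timing harness are arbitrary):

        public class CacheOrder {
            public static void main(String[] args) {
                int n = 4096;
                double[][] a = new double[n][n];
                double sum = 0.0;

                // Cache order: the inner loop walks one contiguous row,
                // so each loaded cache line is fully used.
                long t0 = System.nanoTime();
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        sum += a[i][j];
                long rowOrder = System.nanoTime() - t0;

                // Wrong order: every access jumps to a different row,
                // wasting most of each cache line.
                t0 = System.nanoTime();
                for (int j = 0; j < n; j++)
                    for (int i = 0; i < n; i++)
                        sum += a[i][j];
                long colOrder = System.nanoTime() - t0;

                System.out.println("row order: " + rowOrder + " ns, "
                        + "column order: " + colOrder + " ns (sum=" + sum + ")");
            }
        }

    On most machines the second traversal is several times slower, which is exactly the class of problem the next section describes.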
    6 out of the 10 codes studied here benefited from such high-level optimizations.

    Array Accesses

    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls.

    In code 1, the compiler failed to invert a key loop because of complex addressing:

        do I = 0, 1010, delta_x
          IM = I - delta_x
          IP = I + delta_x
          do J = 5, 995, delta_x
            JM = J - delta_x
            JP = J + delta_x
            T1 = CA1(IP, J) + CA1(I, JP)
            T2 = CA1(IM, J) + CA1(I, JM)
            S1 = T1 + T2 - 4 * CA1(I, J)
            CA(I, J) = CA1(I, J) + D * S1
          end do
        end do

    In code 2, the culprit is conditionals:

        do I = 1, N
          do J = 1, N
            If (IFLAG(I,J) .EQ. 0) then
              T1 = Value(I, J-1)
              T2 = Value(I-1, J)
              T3 = Value(I, J)
              T4 = Value(I+1, J)
              T5 = Value(I, J+1)
              Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
              Delta = ABS(T3 - Value(I,J))
              If (Delta .GT. MaxDelta) MaxDelta = Delta
            endif
          enddo
        enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i      L2: for i      L3: for i
              for l          for l          for j
                for k          for j          for k
                  for j          for k          for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides

    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm was lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit-stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
          do i = 1, GZ
            T1 = CA(i+0, j-1) + CA(i-1, j+0)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse

    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loops one iteration increases the number of result values computed by the loop body from 1 to 4:

        do J = 1, GZ-2, 2
          do I = 1, GZ-2, 2
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
            T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
            T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
            T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            S2 = T2 + T5 - 4 * CA1(i+1, j+0)
            S3 = T3 + T6 - 4 * CA1(i+0, j+1)
            S4 = T4 + T7 - 4 * CA1(i+1, j+1)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
            CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
            CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
            CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
          enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before:

        for (k = 0; k < NK[u]; k++) {
            sum = 0.0;
            for (y = 0; y < NY; y++) {
                sum += W[y][u][k] * delta[y];
            }
            backprop[i++] = sum;
        }

    and after:

        for (k = 0; k < KK - 8; k += 8) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (y = 0; y < NY; y++) {
                sum0 += W[y][0][k+0] * delta[y];
                sum1 += W[y][0][k+1] * delta[y];
                sum2 += W[y][0][k+2] * delta[y];
                sum3 += W[y][0][k+3] * delta[y];
                sum4 += W[y][0][k+4] * delta[y];
                sum5 += W[y][0][k+5] * delta[y];
                sum6 += W[y][0][k+6] * delta[y];
                sum7 += W[y][0][k+7] * delta[y];
            }
            backprop[k+0] = sum0;
            backprop[k+1] = sum1;
            backprop[k+2] = sum2;
            backprop[k+3] = sum3;
            backprop[k+4] = sum4;
            backprop[k+5] = sum5;
            backprop[k+6] = sum6;
            backprop[k+7] = sum7;
        }

    for one of the loops unrolled 8 times.

    Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count

    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

    The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations

    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

        for (y = 0; y < NY; y++) {
            i = 0;
            for (u = 0; u < NU; u++) {
                for (k = 0; k < NK[u]; k++) {
                    dW[y][u][k] += delta[y] * I1[i++];
                }
            }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
In reality, dW and delta do not overlap in memory, so I rewrote the loop as

    for (y = 0; y < NY; y++) {
      i = 0;
      Dy = delta[y];
      for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
          dW[y][u][k] += Dy * I1[i++];
        }
      }
    }

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays

    #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

    a0 = MAT4D(a, q, 0, j, k)

before the loop and then replace all instances of *MAT4D(a, q, i, j, k) in the loop with a0[i].
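The shape of this fix is easy to show outside of C as well. The following is an illustrative sketch in Java (the real code 5 is C and uses the macro above; the array, stride names, and sizes here are hypothetical): it hoists the part of a strided address calculation that is invariant to the inner index.

    // Hypothetical flat 4-D array addressed through explicit strides.
    public final class HoistAddressCalc {
        public static void main(String[] args) {
            final int Q = 2, I = 4, J = 3, K = 5;
            double[] data = new double[Q * I * J * K];
            int s0 = I * J * K, s3 = J * K, s2 = K, s1 = 1; // strides, in elements
            int q = 1, j = 2, k = 3;
            double slow = 0.0, fast = 0.0;

            // Before: the full address expression is evaluated on every iteration.
            for (int i = 0; i < I; i++)
                slow += data[q*s0 + i*s3 + j*s2 + k*s1];

            // After: the part invariant to i is computed once, outside the loop.
            int base = q*s0 + j*s2 + k*s1;
            for (int i = 0; i < I; i++)
                fast += data[base + i*s3];

            System.out.println(slow == fast); // same result, fewer inner-loop operations
        }
    }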
A similar problem appears in code 6, a Fortran program. The key loop in this program is

    do n1 = 1, nh
      nx1 = (n1 - 1) / nz + 1
      nz1 = n1 - nz * (nx1 - 1)
      do n2 = 1, nh
        nx2 = (n2 - 1) / nz + 1
        nz2 = n2 - nz * (nx2 - 1)
        ndx = nx2 - nx1
        ndy = nz2 - nz1
        gxx = grn(1,ndx,ndy)
        gyy = grn(2,ndx,ndy)
        gxy = grn(3,ndx,ndy)
        balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
        balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
      end do
    end do

The programmer has written this loop well: there are no loop-invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are invariant with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N*M possible inner products, it is more efficient to compute all the inner products in one triply nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

    for (i = 0; i < N; i += 8) {
      for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
          sum0 += A[i+0][k] * B[j][k];
          sum1 += A[i+1][k] * B[j][k];
          sum2 += A[i+2][k] * B[j][k];
          sum3 += A[i+3][k] * B[j][k];
          sum4 += A[i+4][k] * B[j][k];
          sum5 += A[i+5][k] * B[j][k];
          sum6 += A[i+6][k] * B[j][k];
          sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
      }
    }

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index

    for (j = 0; j < N; j++) {
      for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
              low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
      }
    }

Note that the value of temp is invariant to j. Thus, we can hoist the computation of temp out of the loop and save its values in an array.

    for (i = 0; i < M; i++) {
      r = i * hrmax;
      TEMP[i] = pow(r, PRM[3]);
    }

[N.B. the case for PRM[3] == 0.0 is omitted here and is reintroduced below.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
      R = rig[j] / 1000.;
      tmp1 = kcoeff * par[2] * beta[j] * par[4];
      tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
      tmp3 = 1.0 + (par[4] * par[4] * R * R);
      tmp4 = par[6] * par[6] / tmp2;
      tmp5 = R * R / tmp3;
      tmp6 = pow(R / par[6], par[5]);
      if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp5;
      }
      else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp4 * tmp6;
      }
      else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp5;
      }
      else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
      }
      for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
      }
    }

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays: one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying, but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old/new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

    MaxDelta = 0.0
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
      enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
      enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating-point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising, since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute more slowly than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

into two disjoint loops

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      end do
    end do

    do i = 1, n
      do j = 1, m
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single-processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x.

Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers. Clearly, we have a problem: personal scientific research codes are highly inefficient and are not running parallel. The developers are unaware of simple optimization techniques that would make programs run faster. They lack education in the art of code optimization and parallel programming.

I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as for the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language, then, when compilers were created, in computer languages that could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different from developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow.

But why make a change? As I’ve described, we need to change our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try to retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process. An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming.

Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.

Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a certificate, is sent along for a series of requests or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections. A minimal sketch of the per-request idea appears below.

Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx  Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming  Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing  Claims Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
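To make the token-per-request idea concrete, here is a minimal Java sketch. It is illustrative only: the class and token names are hypothetical, and a real Azure service would validate a signed token (such as a SAML or SWT token issued by an identity provider) rather than look it up in an in-memory map.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Every call re-validates the caller's token instead of trusting a
    // long-lived logged-on session. All names here are hypothetical.
    public final class TokenPerRequestDemo {

        // Stands in for an identity provider's validation step
        // (e.g., checking a certificate chain or a token signature).
        private static final Map<String, String> ISSUED_TOKENS = new ConcurrentHashMap<>();

        static String validate(String token) {
            String principal = ISSUED_TOKENS.get(token);
            if (principal == null) throw new SecurityException("invalid or expired token");
            return principal;
        }

        // Each request carries the token; no server-side login state is assumed.
        static String handleRequest(String token, String payload) {
            String caller = validate(token);          // authenticate every call
            return caller + " processed: " + payload; // then do the actual work
        }

        public static void main(String[] args) {
            ISSUED_TOKENS.put("token-123", "alice@example.com");
            System.out.println(handleRequest("token-123", "order #42"));
            // A second, independent request must present the claim again:
            System.out.println(handleRequest("token-123", "order #43"));
        }
    }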

    Read the article

  • django/uwsgi/nginx invalid HTTP protocol !!!

    - by user66208
Any idea why this error happens when accessing nginx? uwsgi is running with the command:

    /usr/sbin/uwsgi --socket /home/user/run/project.sock --chmod-socket --pidfile /home/user/project/uwsgi.pid --module project.wsgi_app --pythonpath /home/user/ -p 4

/home/user/project/wsgi_app.py:

    import sys
    import os

    sys.path.append(os.path.abspath(os.path.dirname(__file__)))
    sys.path.append('/home/user/project')
    os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()

Any help is appreciated.

    Read the article

  • Is a university education really worth it for a good programmer?

    - by Jon Purdy
The title says it all, but here's the personal side of it: I've been doing design and programming for about as long as I can remember. If there's a programming problem, I can figure it out. (Though admittedly StackOverflow has allowed me to skip the figuring out and get straight to the doing in many instances.) I've made games, esoteric programming languages, and widgets and gizmos galore. I'm currently working on a general-purpose programming language. There's nothing I do better than programming. However, I'm just as passionate about design. Thus, when I left high school feeling that my design skills were lacking, I decided to attend university for New Media Design and Imaging, a digital design-related major.

For a year, I diligently studied art and programmed in my free time. As the next year progressed, however, I was obligated to take fewer art and design classes and more technical classes. The trouble, of course, was that these classes were geared toward non-technical students and were far beneath my skill level at the time. No amount of petitioning could overcome the institution's reluctance to let me test out of such classes, and the major offered no promise of any greater challenge in the future, so I took the extreme route: I switched into the technical equivalent of the major, New Media Interactive Development.

A lot of my credits moved over into the new major, but many didn't. It would have been infeasible to switch to a more rigorous technical major such as Computer Science, and having tutored Computer Science students at every level here, I doubt I would be exposed to anything that I haven't already or won't eventually find out on my own, since I'm so involved in the field. I'm now on track to graduate perhaps a year later than I had planned, which puts a significant financial strain on my family and my future self. My schedule continues to be bogged down with classes that are wholly unnecessary for me to take. I'm being re-introduced to subjects that I've covered a thousand times over, simply because I've always been interested in it all. And though I succeed in avoiding the cynical and immature tactic of failing to complete work out of some undeserved sense of superiority, I'm becoming increasingly disillusioned by the lack of intellectual stimulation.

Further, my school requires students to complete a number of quarters of co-op work experience proportional to their major. My original major required two quarters, but my current one requires three, delaying my graduation even more. To top it all off, college is putting a severe strain on my relationship with my very close partner of a few years, so I've searched diligently for co-op jobs in my area, alas to no avail.

I'm now in my third year, and approaching the point past which I can no longer handle this. Either I keep my head down, get a degree no matter what it takes, and try to get a job with a company that will pay me enough to do what I love so that I can eventually pay off my loans; or I cut my losses now, move wherever there is work, and in six months start paying off what debt I've accumulated thus far. So the real question is: is a university education really more than just a formality?

It's a big decision, and one I can't make lightly. I think this is the appropriate venue for this kind of question, and I hope it sticks around for the sake of others who might someday find themselves in similar situations. My heartfelt thanks for reading, and in advance for your help.

    Read the article

  • Why is there no service-oriented language?

    - by Wolfgang
Edit: To avoid further confusion: I am not talking about web services and such. I am talking about structuring applications internally; it's not about how computers communicate. It's about programming languages, compilers, and how the imperative programming paradigm is extended.

Original: In the imperative programming field, we saw two paradigms in the past 20 years (or more): object-oriented (OO), and service-oriented (SO), a.k.a. component-based (CB). Both paradigms extend the imperative programming paradigm by introducing their own notion of modules. OO calls them objects (and classes) and lets them encapsulate both data (fields) and procedures (methods) together. SO, in contrast, separates data (records, beans, ...) from code (components, services). However, only OO has programming languages which natively support its paradigm: Smalltalk, C++, Java and all other JVM-compatibles, C# and all other .NET-compatibles, Python, etc. SO has no such native language. It only comes into existence on top of procedural languages or OO languages: COM/DCOM (binary, C, C++), CORBA, EJB, Spring, Guice (all Java), ...

These SO frameworks clearly suffer from the missing native language support for their concepts.

They start using OO classes to represent services and records. This leads to designs where there is a clear distinction between classes that have methods only (services) and those that have fields only (records). Inheritance between services or records is then simulated by inheritance of classes. Technically, it's not kept so strictly, but in general programmers are advised to make classes play only one of the two roles. (A small sketch of this services-versus-records split appears below.)

They use additional, external languages to represent the missing parts: IDLs, XML configurations, annotations in Java code, or even embedded DSLs as in Guice. This is especially needed, though not only, because the composition of services is not part of the service code itself. In OO, objects create other objects, so there is no need for such facilities; but for SO there is, because services don't instantiate or configure other services.

They establish an inner-platform effect on top of OO (early EJB, CORBA), where the programmer has to write all the code that is needed to "drive" SO. Classes represent only a part of the nature of a service, and lots of classes have to be written to form a service together. All that boilerplate is necessary because there is no SO compiler which would produce it for the programmer. This is just like what some people did in C for OO when there was no C++: you just pass the record which holds the data of the object as a first parameter to the procedure which is the method. In an OO language this parameter is implicit, and the compiler produces all the code that we need for virtual functions etc. For SO, this is clearly missing.

Especially the newer frameworks extensively use AOP or introspection to add the missing parts to an OO language. This doesn't bring the necessary language expressiveness, but it avoids the boilerplate code described in the previous point.

Some frameworks use code generation to produce the boilerplate code. Configuration files in XML or annotations in OO code are the source of information for this.

Not all of the phenomena that I mentioned above can be attributed to SO, but I hope it clearly shows that there is a need for an SO language. Since this paradigm is so popular: why isn't there one? Or maybe there are some academic ones, but at least the industry doesn't use one.
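As a small illustration of the services-versus-records split described above, here is a hedged Java sketch (all names are hypothetical). The record has only fields, the service has only methods, and the composition at the bottom is exactly the hand-written wiring that a native SO compiler would be expected to generate from an IDL or configuration.

    // "Record": data only, no behavior.
    final class OrderRecord {
        final String id;
        final double amount;
        OrderRecord(String id, double amount) { this.id = id; this.amount = amount; }
    }

    // "Service": behavior only, no data of its own.
    interface BillingService {
        void charge(OrderRecord order);
    }

    final class ConsoleBillingService implements BillingService {
        public void charge(OrderRecord order) {
            System.out.println("charging " + order.amount + " for " + order.id);
        }
    }

    public final class Wiring {
        // In a native SO language this composition step would be generated by
        // the compiler; here it is the hand-written boilerplate the post laments.
        public static void main(String[] args) {
            BillingService billing = new ConsoleBillingService();
            billing.charge(new OrderRecord("A-1", 9.99));
        }
    }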

    Read the article

  • Domain authentication used for Kerberos-based authentication of users on my server

    - by J G
Suppose a user process has authenticated itself against the domain's directory server via Kerberos, and then attempts to open a network socket to my server application. My server application has a white-list of users from the domain directory server. How does my server app authenticate the user from the directory based on this socket-opening attempt? (To keep things simple, let's say my server is written in Java and the directory server is Active Directory.) EDIT: My question is about how the client asks for an authentication token.
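For the Java case, the usual building block is the GSS-API (org.ietf.jgss): the client obtains a Kerberos service ticket from the KDC and turns it into a token with GSSContext.initSecContext, and the server feeds each received token to GSSContext.acceptSecContext until the context is established, then reads the authenticated principal to check against the white-list. Below is a minimal, hedged sketch of the acceptor side; the length-prefixed framing, the JAAS login (Krb5LoginModule with the service keytab), and the error handling are all simplified, and the names and port are hypothetical.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.Set;
    import org.ietf.jgss.GSSContext;
    import org.ietf.jgss.GSSCredential;
    import org.ietf.jgss.GSSManager;

    // Sketch: accept a Kerberos (GSS-API) handshake over a socket, then check
    // the authenticated principal against a white-list. Assumes the JVM runs
    // under a JAAS Krb5LoginModule configuration with the service keytab.
    public final class KerberosAcceptor {
        static final Set<String> WHITELIST = Set.of("alice@EXAMPLE.COM", "bob@EXAMPLE.COM");

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(4444);
                 Socket socket = server.accept()) {
                DataInputStream in = new DataInputStream(socket.getInputStream());
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());

                GSSManager manager = GSSManager.getInstance();
                GSSContext context = manager.createContext((GSSCredential) null);

                // Token exchange: the client produces tokens with initSecContext
                // and sends them length-prefixed; we answer until established.
                while (!context.isEstablished()) {
                    byte[] token = new byte[in.readInt()];
                    in.readFully(token);
                    byte[] reply = context.acceptSecContext(token, 0, token.length);
                    if (reply != null) {
                        out.writeInt(reply.length);
                        out.write(reply);
                        out.flush();
                    }
                }

                String principal = context.getSrcName().toString(); // e.g. "alice@EXAMPLE.COM"
                if (!WHITELIST.contains(principal)) {
                    socket.close(); // not on the white-list: reject the connection
                }
            }
        }
    }

On the client side, the matching calls are GSSManager.createName with GSSName.NT_HOSTBASED_SERVICE for the service principal, createContext, and then initSecContext in a loop; initSecContext is the step where the client asks the KDC for the service ticket and wraps it into the token that travels over the socket.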

    Read the article

< Previous Page | 256 257 258 259 260 261 262 263 264 265 266 267  | Next Page >