Search Results

Search found 10989 results on 440 pages for 'wpf commands'.


  • How to draw a filled envelope like a cone in OpenGL (using GLUT)?

    - by ashishsony
    Hi, I am relatively new to OpenGL programming and am currently involved in a project that uses freeglut for OpenGL rendering. I need to draw an envelope that looks like a 2D cone, filled with some color and with some transparency applied. Is the freeglut toolkit equipped with built-in functionality (or some trick) to draw filled geometry, or is there some other API that has built-in support for filled geometry? Thanks. Best regards.

    Edit 1: Just to clarify the 2D cone: the envelope is the graphical interpretation of the coverage area of an aircraft during interception (of an enemy aircraft), and it resembles a sector of a circle; I should have said sector instead. glutSolidCone does not help me, as I want to draw a filled sector of a circle, which I have already drawn; what remains is to fill it with some color. How do I fill geometry with color in OpenGL? Thanks.

    Edit 2: OK, thanks for replying. All the answers posted to this question can work for my problem in a way, but I would definitely want to know how to fill a geometry with some color. Say I want to draw an envelope that is a parabola; in that case there would be no default GLUT function to draw a filled parabola (or is there?). So, to generalize the question: how do I draw a custom geometry in some solid color? Thanks.

    Edit 3: The answer that mstrobl posted works for GL_TRIANGLES, but for code such as:

        glBegin(GL_LINE_STRIP);
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(0.0, 0.0, 0.0);
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(200.0, 0.0, 0.0);
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(200.0, 200.0, 0.0);
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(0.0, 200.0, 0.0);
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(0.0, 0.0, 0.0);
        glEnd();

    which draws a square, only a wireframe square is drawn; I need to fill it with blue. Any way to do it? If I issue drawing commands for a closed curve, like a pie, is there a way to fill it with a color? I know how it is done for GL_TRIANGLES, but how do I do it for any closed curve? Thanks.
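
    A filled sector can be produced directly with a GL_TRIANGLE_FAN anchored at the sector's apex; no special GLUT call is needed. Below is a minimal sketch under stated assumptions: the function name, radius, angles, and segment count are invented, and it uses the same legacy immediate-mode API as the question. Blending must be enabled for the transparency to show.

        #include <GL/glut.h>
        #include <math.h>

        /* Draw a filled circular sector from startAngle to endAngle (radians),
           centered at (cx, cy). The fan's first vertex is the sector apex. */
        void drawFilledSector(float cx, float cy, float r,
                              float startAngle, float endAngle, int segments)
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glColor4f(0.0f, 0.0f, 1.0f, 0.5f);      /* blue, 50% opacity */
            glBegin(GL_TRIANGLE_FAN);
            glVertex2f(cx, cy);                     /* apex */
            for (int i = 0; i <= segments; i++) {
                float a = startAngle + (endAngle - startAngle) * i / segments;
                glVertex2f(cx + r * cosf(a), cy + r * sinf(a));
            }
            glEnd();
        }

    The same idea covers the square in Edit 3 (use GL_QUADS or GL_TRIANGLE_FAN instead of GL_LINE_STRIP). For arbitrary, possibly concave closed curves, a fan is not enough; GLU's tessellator (gluNewTess and friends) is the usual route in this generation of the API.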


  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    I have a little problem with my Java socket code. I'm writing an Android client application which sends data to a Java multithreaded socket server on my PC through a direct(!) wireless connection. It works fine, but I want to improve it for mobile use, as it is very power-consuming right now. When I remove two particular lines in my code, the CPU usage of my mobile device (HTC One X) is totally fine, but then my connection seems to suffer from high ping times or something like that. Here is a server code snippet where I receive the client's data:

        while (true) {
            try {
                ....
                Object obj = in.readObject();
                if (obj != null) {
                    Class clazz = obj.getClass();
                    String className = clazz.getName();
                    if (className.equals("java.lang.String")) {
                        String cmd = (String) obj;
                        if (cmd.equals("dc")) {
                            System.out.println("Client " + id + " disconnected!");
                            Server.connectedClients[id - 1] = false;
                            break;
                        }
                        if (cmd.substring(0, 1).equals("!")) {
                            robot.keyRelease(PlayerEnum.getKey(cmd, id));
                        } else {
                            robot.keyPress(PlayerEnum.getKey(cmd, id));
                        }
                    }
                }
            } catch ....

    Here's the client part, where I send my data in a while loop:

        private void networking() {
            try {
                if (client != null) {
                    ....
                    out.writeObject(sendQueue.poll());
                    ....
                }
            } catch ....

    Written this way, I send data every time the while loop executes; when sendQueue is empty, a null "Object" is sent. This results in "high" network traffic and "high" CPU usage, BUT all sent commands are received nearly immediately. When I change the code to the following:

        while (true)
            ...
            if (sendQueue.peek() != null) {
                out.writeObject(sendQueue.poll());
            }
            ...

    the CPU usage is totally fine, but I get some lag: the commands do not arrive fast enough. As I said, it works fine (besides the CPU usage) if I send data (with those null objects) on every loop iteration, but I'm sure this is very rough coding style, because I'm effectively flooding the network. Any hints? What am I doing wrong? Thanks for your help! Sincerely yours, maaft
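
    The usual cure for this poll-or-flood dilemma is to block on the queue instead of spinning on it. A minimal sketch, assuming sendQueue is (or can be changed to) a java.util.concurrent.BlockingQueue; the class and field names here are invented:

        import java.io.ObjectOutputStream;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        class Sender implements Runnable {
            private final BlockingQueue<String> sendQueue = new LinkedBlockingQueue<>();
            private final ObjectOutputStream out;

            Sender(ObjectOutputStream out) { this.out = out; }

            void queue(String cmd) { sendQueue.offer(cmd); }

            @Override
            public void run() {
                try {
                    while (true) {
                        String cmd = sendQueue.take(); // sleeps until a command arrives:
                                                       // no busy-wait, no null filler
                        out.writeObject(cmd);
                        out.flush();                   // push the command out immediately
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    If a residual lag remains even with immediate writes, disabling Nagle's algorithm on the socket (setTcpNoDelay(true)) is also worth trying, since it deliberately delays small packets like these single-command writes.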


  • Question about POP3 message termination octet

    - by user361633
    This is from the POP3 RFC:

        "Responses to certain commands are multi-line. In these cases, which are
        clearly indicated below, after sending the first line of the response and
        a CRLF, any additional lines are sent, each terminated by a CRLF pair.
        When all lines of the response have been sent, a final line is sent,
        consisting of a termination octet (decimal code 046, ".") and a CRLF
        pair. If any line of the multi-line response begins with the termination
        octet, the line is "byte-stuffed" by pre-pending the termination octet to
        that line of the response. Hence a multi-line response is terminated with
        the five octets "CRLF.CRLF". When examining a multi-line response, the
        client checks to see if the line begins with the termination octet. If so
        and if octets other than CRLF follow, the first octet of the line (the
        termination octet) is stripped away. If so and if CRLF immediately
        follows the termination character, then the response from the POP server
        is ended and the line containing ".CRLF" is not considered part of the
        multi-line response."

    Well, I have a problem with this: for example, Gmail sometimes sends the termination octet and then sends the CRLF pair in the NEXT line. For example:

        +OK blah blah
        blah blah.
        \r\n

    That's very rare, but it happens sometimes, so obviously I'm unable to determine the end of the message in such a case, because I'm expecting a line that consists of ".\r\n". Seriously, is Gmail violating the POP3 protocol, or am I doing something wrong?

    I also have a second question. English is not my first language, so I cannot understand this part completely:

        "If any line of the multi-line response begins with the termination
        octet, the line is "byte-stuffed" by pre-pending the termination octet to
        that line of the response. Hence a multi-line response is terminated with
        the five octets "CRLF.CRLF"."

    When exactly is CRLF.CRLF used? Can someone give me a simple example? The RFC says it is used when any line of the response begins with the termination octet, but I don't see any lines that start with "." in the messages that are terminated with CRLF.CRLF. I checked that. Maybe I don't understand something; that's why I'm asking.
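
    A concrete dot-stuffing example may make the RFC text clearer (the message content is invented; "S:" marks lines sent by the server):

        S: +OK message follows
        S: Hello,
        S: ..profile was updated
        S: Bye.
        S: .

    The third line of the actual message is ".profile was updated"; because it begins with ".", the server prepends a second "." and the client strips it off again. The final "." on a line by itself is the terminator: together with the CRLF that ended the previous line it forms the five octets CRLF "." CRLF. Note that "Bye." merely ends with a dot, so it is not stuffed; stuffing only applies to lines that begin with the termination octet, which is why most messages never show it. Every message is still terminated with CRLF.CRLF whether or not any line needed stuffing.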


  • Trouble using South with Django and Heroku

    - by Dan
    I had an existing Django project that I've just added South to. I ran syncdb locally. I ran manage.py schemamigration app_name locally. I ran manage.py migrate app_name --fake locally. I committed and pushed to the Heroku master. I ran syncdb on Heroku. I ran manage.py schemamigration app_name on Heroku. I ran manage.py migrate app_name on Heroku. I then receive this:

        $ heroku run python notecard/manage.py migrate notecards
        Running python notecard/manage.py migrate notecards attached to terminal... up, run.1
        Running migrations for notecards:
        - Migrating forwards to 0005_initial.
        > notecards:0003_initial
        Traceback (most recent call last):
          File "notecard/manage.py", line 14, in <module>
            execute_manager(settings)
          File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
            utility.execute()
          File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute
            output = self.handle(*args, **options)
          File "/app/lib/python2.7/site-packages/south/management/commands/migrate.py", line 105, in handle
            ignore_ghosts = ignore_ghosts,
          File "/app/lib/python2.7/site-packages/south/migration/__init__.py", line 191, in migrate_app
            success = migrator.migrate_many(target, workplan, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 221, in migrate_many
            result = migrator.__class__.migrate_many(migrator, target, migrations, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 292, in migrate_many
            result = self.migrate(migration, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 125, in migrate
            result = self.run(migration)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 99, in run
            return self.run_migration(migration)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 81, in run_migration
            migration_function()
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 57, in <lambda>
            return (lambda: direction(orm))
          File "/app/notecard/notecards/migrations/0003_initial.py", line 15, in forwards
            ('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
          File "/app/lib/python2.7/site-packages/south/db/generic.py", line 226, in create_table
            ', '.join([col for col in columns if col]),
          File "/app/lib/python2.7/site-packages/south/db/generic.py", line 150, in execute
            cursor.execute(sql, params)
          File "/app/lib/python2.7/site-packages/django/db/backends/util.py", line 34, in execute
            return self.cursor.execute(sql, params)
          File "/app/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        django.db.utils.DatabaseError: relation "notecards_semester" already exists

    I have 3 models: Section, Semester, and Notecards. I've added one field to the Notecards model and I cannot get it added on Heroku. Thank you.
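
    The error says the notecards_semester table already exists on Heroku, which is what happens when syncdb has already created the tables and South then tries to replay the initial migrations from scratch. One common way out, assuming the Heroku database really does match the schema up to migration 0003, is to fake the already-applied migrations there and only run the newer ones for real:

        $ heroku run python notecard/manage.py migrate notecards 0003 --fake
        $ heroku run python notecard/manage.py migrate notecards

    Treat the target number as an assumption: fake up to the last migration whose schema the Heroku database already has, then migrate forward normally.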


  • UNIX pipes in C block on read

    - by Toni Cárdenas
    I'm struggling to implement a shell with pipelines for class.

        typedef struct {
            char** cmd;
            int in[2];
            int out[2];
        } cmdio;

        cmdio cmds[MAX_PIPE + 1];

    Commands in the pipeline are read and stored in cmds. cmdio[i].in is the pair of file descriptors of the input pipe returned by pipe(). For the first command, which reads from terminal input, it is just {fileno(stdin), -1}. cmdio[i].out is similar for the output pipe/terminal output. cmdio[i].in is the same as cmd[i-1].out. For example:

        $ ls -l | sort | wc
        CMD: ls -l   IN: 0 -1   OUT: 3 4
        CMD: sort    IN: 3 4    OUT: 5 6
        CMD: wc      IN: 5 6    OUT: -1 1

    We pass each command to process_command, which does a number of things:

        for (cmdi = 0; cmds[cmdi].cmd != NULL; cmdi++) {
            process_command(&cmds[cmdi]);
        }

    Now, inside process_command:

        if (!(pid_fork = fork())) {
            dup2(cmd->in[0], fileno(stdin));
            dup2(cmd->out[1], fileno(stdout));
            if (cmd->in[1] >= 0) {
                if (close(cmd->in[1])) {
                    perror(NULL);
                }
            }
            if (cmd->out[0] >= 0) {
                if (close(cmd->out[0])) {
                    perror(NULL);
                }
            }
            execvp(cmd->cmd[0], cmd->cmd);
            exit(-1);
        }

    The problem is that reading from the pipe blocks forever:

        COMMAND
        $ ls | wc
        Created pipe, in: 5 out: 6
        Foreground pid: 9042, command: ls, Exited, info: 0
        [blocked running read() within wc]

    If, instead of replacing the process image with execvp, I just do this:

        if (!(pid_fork = fork())) {
            dup2(cmd->in[0], fileno(stdin));
            dup2(cmd->out[1], fileno(stdout));
            if (cmd->in[1] >= 0) {
                if (close(cmd->in[1])) {
                    perror(NULL);
                }
            }
            if (cmd->out[0] >= 0) {
                if (close(cmd->out[0])) {
                    perror(NULL);
                }
            }
            char buf[6];
            read(fileno(stdin), buf, 5);
            buf[5] = '\0';
            printf("%s\n", buf);
            exit(0);
        }

    it happens to work:

        COMMAND
        $ cmd1 | cmd2 | cmd3 | cmd4 | cmd5
        Pipe created, in: 11 out: 12
        Pipe created, in: 13 out: 14
        Pipe created, in: 15 out: 16
        Pipe created, in: 17 out: 18
        hola!
        Foreground pid: 9251, command: cmd1, Exited, info: 0
        Foreground pid: 9252, command: cmd2, Exited, info: 0
        Foreground pid: 9253, command: cmd3, Exited, info: 0
        Foreground pid: 9254, command: cmd4, Exited, info: 0
        hola!
        Foreground pid: 9255, command: cmd5, Exited, info: 0

    What could be the problem?
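
    A likely culprit: the parent shell never closes its own copies of the pipe descriptors. Each child closes the ends it does not use, but the write end of every pipe also stays open in the parent (and in later children that inherit it), so the reader never sees EOF, and wc blocks in read() forever. The test version only "works" because it reads exactly 5 bytes and exits without waiting for EOF. A minimal sketch of the fix in the launch loop, using the question's own types; the descriptor guard is an assumption to avoid closing the real stdin/stdout:

        for (cmdi = 0; cmds[cmdi].cmd != NULL; cmdi++) {
            process_command(&cmds[cmdi]);   /* forks the child as before */

            /* Parent: this command's input pipe is now fully handed over
               (the previous child writes to it, this child reads from it),
               so the parent must drop both ends or EOF never arrives. */
            if (cmds[cmdi].in[0] > 2)
                close(cmds[cmdi].in[0]);
            if (cmds[cmdi].in[1] > 2)
                close(cmds[cmdi].in[1]);
        }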


  • mod_rewrite not working for a specific directory

    - by punkish
    This has got me completely foxed for a couple of days now, and I am convinced that I will look stupid once I solve it, but will be even stupider if I don't ask for help now. I have mod_rewrite working successfully on my localhost (no vhosts involved; this is my laptop, my development machine), and I use .htaccess in various directories to help rewrite crufty URLs to clean ones. EXCEPT... it doesn't work in one directory. Since it is impossible to reproduce my entire laptop in this question, I provide the following details.

    In my httpd.conf, I have mod_rewrite.so loaded:

        LoadModule rewrite_module modules/mod_rewrite.so

    In my httpd.conf, I have included another conf file, like so:

        Include /usr/local/apache2/conf/other/punkish.conf

    In my punkish.conf, I have directories defined like so:

        DocumentRoot "/Users/punkish/Sites"
        <Directory "/Users/punkish/Sites">
            Options ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "/Users/punkish/Sites/one">
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "/Users/punkish/Sites/two">
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    In ~/Sites/one I have the following .htaccess file:

        RewriteEngine On
        RewriteBase /one/
        # If an actual file or directory is requested, serve directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Otherwise, pass everything through to the dispatcher
        RewriteRule ^(.*)$ index.cgi/$1 [L,QSA]

    and everything works just fine. However, in my directory ~/Sites/two I have the following .htaccess file:

        RewriteEngine On
        RewriteBase /two/
        # If an actual file or directory is requested, serve directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Otherwise, pass everything through to the dispatcher
        RewriteRule ^(.*)$ index.cgi/$1 [L,QSA]

    and nothing works. Nada. Zip. Zilch. I just get a 404. I have determined that mod_rewrite is not even looking at my ~/Sites/two/.htaccess, by putting spurious commands in it and not getting any error other than 404. Another confounding issue: I have turned on RewriteLog in my httpd.conf with RewriteLogLevel 3, but my rewrite_log is completely empty. I know this is hard to troubleshoot unless sitting physically at the computer in question, but I hope someone can give me some indication as to what is going on.

    Update: There are no aliases involved anywhere. This is my laptop, and everything is under the above-stated DocumentRoot, so I just access each directory as http://localhost/. Yes, typos are a big possibility (I did say that I will look stupid once I solve it); however, for now, I have not discovered a single typo anywhere, and yes, I have restarted Apache about a dozen times now. I even thought that perhaps I had two different Apaches running, but no, I have only one, the one under /usr/local/apache2, and I installed it myself a while back.
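
    Since spurious directives in ~/Sites/two/.htaccess produce no 500 error, that file is evidently never parsed, which points at the server applying a different <Directory> section (or a different config altogether) to that path than expected. A couple of low-effort checks that may narrow it down; the binary and log paths are assumptions based on the /usr/local/apache2 install mentioned above:

        # Show which config file this httpd was compiled to read
        /usr/local/apache2/bin/httpd -V | grep -i config

        # Dump the parsed virtual host and settings summary
        /usr/local/apache2/bin/httpd -S

        # Confirm requests for /two/ actually hit this server instance
        tail -f /usr/local/apache2/logs/access_log

    Case differences between the <Directory> path and the real filesystem path, or a symlink in one of the two (so the canonical path no longer matches the <Directory> block), are classic ways one directory silently falls back to AllowOverride None.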


  • I need help converting a C# string from one character encoding to another?

    - by Handleman
    According to Spolsky, I can't call myself a developer, so there is a lot of shame behind this question...

    Scenario: from a C# application, I would like to take a string value from a SQL DB and use it as the name of a directory. I have a secure (SSL) FTP server on which I want to set the current directory using the string value from the DB.

    Problem: everything is working fine until I hit a string value with a "special" character; I seem unable to encode the directory name correctly to satisfy the FTP server. The code example below uses the "special" character é as an example; uses WinSCP as an external application for the FTPS comms; does not show all the code required to set up the Process "_winscp"; sends commands to the WinSCP exe by writing to the process's standard input; for simplicity, does not get the info from the DB but instead simply declares a string (I did do an .Equals to confirm that the value from the DB is the same as the declared string); makes three attempts to set the current directory on the FTP server using different string encodings, all of which fail; and finally makes an attempt to set the directory using a string that was created from a hand-crafted byte array, which works.

        Process _winscp = new Process();
        byte[] buffer;
        string nameFromString = "Sinéad O'Connor";

        _winscp.StandardInput.WriteLine("cd \"" + nameFromString + "\"");

        buffer = Encoding.UTF8.GetBytes(nameFromString);
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.UTF8.GetString(buffer) + "\"");

        buffer = Encoding.ASCII.GetBytes(nameFromString);
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.ASCII.GetString(buffer) + "\"");

        byte[] nameFromBytes = new byte[] { 83, 105, 110, 130, 97, 100, 32, 79, 39, 67, 111, 110, 110, 111, 114 };
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.Default.GetString(nameFromBytes) + "\"");

    The UTF8 encoding changes é to 101 (decimal) but the FTP server doesn't like it. The ASCII encoding changes é to 63 (decimal) but the FTP server doesn't like it. When I represent é as the value 130 (decimal), the FTP server is happy, except I can't find a method that will do this for me (I had to manually construct the string from explicit bytes). Anyone know what I should do to my string to encode the é as 130, make the FTP server happy, and finally elevate me to a level 1 developer by explaining the one single thing a developer should understand?
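
    Byte 130 (0x82) is 'é' in the IBM OEM code pages (437 and 850) that Windows console programs conventionally read on their standard input, so the missing piece is most likely an OEM encoding rather than UTF-8 or ASCII. A sketch of the idea; whether 850 or 437 applies depends on the system locale, so treat the code-page number as an assumption:

        using System.IO;
        using System.Text;

        // Wrap the child process's stdin in a writer that uses the OEM code
        // page, so 'é' is emitted as the single byte 130 (0x82) that the
        // hand-crafted byte array above already proved the FTP side accepts.
        Encoding oem = Encoding.GetEncoding(850);   // or 437, locale-dependent
        using (var stdin = new StreamWriter(_winscp.StandardInput.BaseStream, oem))
        {
            stdin.WriteLine("cd \"Sinéad O'Connor\"");
            stdin.Flush();
        }

    Encoding.GetEncoding(850).GetBytes("é") yields { 130 }, which is exactly the value that worked in the last attempt above.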


  • Fast response on first Socket I/O request but slow every other time when communicating with remote serial port

    - by GreenGodot
    I'm using sockets to pass serial commands to a remote device, and the response to each request is sent back and printed out. However, I am having a problem: the first time is instant, but after that it can take up to 20 seconds to receive a reply. I think the problem is with my attempt at threading, but I am not entirely sure.

        new Thread() {
            @Override
            public void run() {
                System.out.println("opened");
                try {
                    isSocketRetrieving.setText("Opening Socket");
                    socket = new Socket(getAddress(), getRemotePort());
                    DataOutput = new DataOutputStream(socket.getOutputStream());
                    inFromServer = new BufferedReader(
                            new InputStreamReader(socket.getInputStream()));
                    String line = "";
                    isSocketRetrieving.setText("Reading Stream......");
                    while ((line = inFromServer.readLine()) != null) {
                        System.out.println(line);
                        if (line.contains(getHandshakeRequest())) {
                            DataOutput.write((getHandshakeResponse().toString() + "\r").getBytes());
                            DataOutput.flush();
                            DataOutput.write((getCommand().toString() + "\r").getBytes());
                            DataOutput.flush();
                            int pause = (line.length() * 8 * 1000) / getBaud();
                            sleep(pause);
                        } else if (line.contains(readingObject.getExpected())) {
                            System.out.println(line);
                            textArea.append("value = " + line + "\n");
                            textAreaScroll.revalidate();
                            System.out.println("Got Value");
                            break;
                        }
                    }
                    System.out.println("Ended");
                    try {
                        inFromServer.close();
                        DataOutput.close();
                        socket.close();
                        isSocketRetrieving.setText("Socket is inactive...");
                        rs232Table.addMouseListener(listener);
                        interrupt();
                        join();
                    } catch (IOException e) {
                        e.printStackTrace();
                    } catch (InterruptedException e) {
                        System.out.println("Thread exited");
                    }
                } catch (NumberFormatException e1) {
                    e1.printStackTrace();
                } catch (UnknownHostException e1) {
                    e1.printStackTrace();
                } catch (IOException e1) {
                    e1.printStackTrace();
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }.start();
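
    Two socket settings are cheap to rule out before digging into the threading; a sketch, with the timeout value an arbitrary assumption:

        socket = new Socket(getAddress(), getRemotePort());
        socket.setTcpNoDelay(true);  // disable Nagle batching of small writes,
                                     // which targets exactly this kind of
                                     // short command/response traffic
        socket.setSoTimeout(30000);  // make reads fail loudly after 30 s
                                     // instead of appearing to hang

    It is also worth noting that the interrupt()/join() pair at the end runs on the thread itself (a thread joining on itself simply waits for its own interrupt), so that cleanup may not be doing what was intended; opening one connection and reusing it, rather than spawning this whole thread per request, would sideline that problem entirely.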


  • Dynamically loaded jQuery with GreaseMonkey inconsistent on pages (refreshing seems to fix it)... do

    - by uprightnetizen
    Hi, I want a custom page-analysis footer on every site I visit, so I've used a method to attach jQuery to unsafeWindow. I then create a floating footer on the page. I want to be able to call commands from a menu, do some processing, then put the results in the footer. Unfortunately it sometimes works and sometimes doesn't. At least two alerts should happen in the printOutput function. Sometimes it only fires one, and then it (crashes?) without error. On other pages, both alerts fire and it finds the element, but it doesn't add the extra text (e.g. www.linode.com). Refreshing the page and then running the printOutput command again seems to always work. Does anyone know what's going on? The userscript can be installed at: http://www.captionwizard.com/test/page_analysis.user.js

        // ==UserScript==
        // @name page_analysis
        // @namespace markspace
        // @description Page Analysis
        // @include http://*/*
        // ==/UserScript==

        (function() {
            // Add jQuery
            var GM_JQ = document.createElement('script');
            GM_JQ.src = 'http://code.jquery.com/jquery-1.4.2.min.js';
            GM_JQ.type = 'text/javascript';
            document.getElementsByTagName('head')[0].appendChild(GM_JQ);

            var jqueryActive = false;

            // Check if jQuery's loaded
            function GM_wait() {
                if(typeof unsafeWindow.jQuery == 'undefined') {
                    window.setTimeout(GM_wait,100);
                } else {
                    $ = unsafeWindow.jQuery;
                    letsJQuery();
                }
            }
            GM_wait();

            function letsJQuery() {
                jqueryActive = true;
                setupOutputFooter();
            }

            /******************************* Analysis FOOTER Functions ******************************/
            function printOutput(someText) {
                alert('printing output');
                if($('div.analysis_footer').length) {
                    alert('is here - appending');
                    $('div.analysis_footer').append('<br>' + someText);
                } else {
                    alert('not here - trying again');
                    setupOutputFooter();
                    $('div.analysis_footer').append('<br>' + someText);
                }
            }

            GM_registerMenuCommand("Test Output", testOutput, "k", "control", "k" );

            function testOutput() {
                printOutput('testing this');
            }

            function setupOutputFooter() {
                $('<div class="analysis_footer">Page Analysis Footer:</div>').appendTo('body');
                $('div.analysis_footer').css('position','fixed').css('bottom', '0px').css('background-color','#F8F8F8');
                $('div.analysis_footer').css('width','100%').css('color','#3B3B3B').css('font-size', '0.8em');
                $('div.analysis_footer').css('font-family', '"Myriad",Verdana,Arial,Helvetica,sans-serif').css('padding', '5px');
                $('div.analysis_footer').css('border-top', '1px solid black').css('text-align', 'left');
            }
        }());


  • Incompatible library creating new project with Aptana

    - by Phil Rice
    I am a Ruby and Rails newbie, so my abilities to debug this are somewhat limited. I have just added the Eclipse plugin, which failed, then downloaded the latest Aptana Studio, which also failed. The failure was the same in both cases. The nature of the failure is that when I create a new Rails project, I get an error message about an incompatible library version, "C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so". The project is actually created, along with directories and files. Google searches around this error message have only returned a couple of hits, which were not very helpful. I am wondering if this is about 64-bit libraries. My software stack is:

        Windows 7 Home Premium 64-bit
        Aptana RadRails, build: 2.0.5.1278709071
        Ruby 1.9.3
        gem 1.8.24

    The console shows:

        "4320"
        C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': iconv will be deprecated in the future, use String#encode instead.
        C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': incompatible library version - C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so (LoadError)
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/mongrel.rb:12:in `<top (required)>'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `rescue in require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:35:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler/mongrel.rb:1:in `<top (required)>'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `const_get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `block in get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `each'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rails-2.3.4/lib/commands/server.rb:45:in `<top (required)>'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from script/server:3:in `<top (required)>'
            from -e:2:in `load'
            from -e:2:in `<main>'


  • Why do I get a WCF timeout even though my service call and callback are successful?

    - by KallDrexx
    I'm playing around with hooking up an in-game console to a WCF interface, so an external application can send console commands and receive console output. To accomplish this I created the following service contracts:

        public interface IConsoleNetworkCallbacks
        {
            [OperationContract(IsOneWay = true)]
            void NewOutput(IEnumerable<string> text, string category);
        }

        [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IConsoleNetworkCallbacks))]
        public interface IConsoleInterface
        {
            [OperationContract]
            void ProcessInput(string input);

            [OperationContract]
            void ChangeCategory(string category);
        }

    On the server I implemented it with:

        public class ConsoleNetworkInterface : IConsoleInterface, IDisposable
        {
            public ConsoleNetworkInterface()
            {
                ConsoleManager.Instance.RegisterOutputUpdateHandler(OutputHandler);
            }

            public void Dispose()
            {
                ConsoleManager.Instance.UnregisterOutputHandler(OutputHandler);
            }

            public void ProcessInput(string input)
            {
                ConsoleManager.Instance.ProcessInput(input);
            }

            public void ChangeCategory(string category)
            {
                ConsoleManager.Instance.UnregisterOutputHandler(OutputHandler);
                ConsoleManager.Instance.RegisterOutputUpdateHandler(OutputHandler, category);
            }

            protected void OutputHandler(IEnumerable<string> text, string category)
            {
                var callbacks = OperationContext.Current.GetCallbackChannel<IConsoleNetworkCallbacks>();
                callbacks.NewOutput(text, category);
            }
        }

    On the client I implemented the callback with:

        public class Callbacks : IConsoleNetworkCallbacks
        {
            public void NewOutput(IEnumerable<string> text, string category)
            {
                MessageBox.Show(string.Format("{0} lines received for '{1}' category",
                    text.Count(), category));
            }
        }

    Finally, I establish the service host with the following class:

        public class ConsoleServiceHost : IDisposable
        {
            protected ServiceHost _host;

            public ConsoleServiceHost()
            {
                _host = new ServiceHost(typeof(ConsoleNetworkInterface),
                    new Uri[] { new Uri("net.pipe://localhost") });
                _host.AddServiceEndpoint(typeof(IConsoleInterface),
                    new NetNamedPipeBinding(), "FrbConsolePipe");
                _host.Open();
            }

            public void Dispose()
            {
                _host.Close();
            }
        }

    and use the following code on my client to establish the connection:

        protected Callbacks _callbacks;
        protected IConsoleInterface _proxy;

        protected void ConnectToConsoleServer()
        {
            _callbacks = new Callbacks();
            var factory = new DuplexChannelFactory<IConsoleInterface>(_callbacks,
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/FrbConsolePipe"));
            _proxy = factory.CreateChannel();
            _proxy.ProcessInput("Connected");
        }

    So what happens is that ConnectToConsoleServer() is called and gets all the way to _proxy.ProcessInput("Connected");. In my game (on the server) I immediately see the output caused by the ProcessInput call, but the client is still stalled on the _proxy.ProcessInput() call. After a minute my client gets a JIT TimeoutException, yet at the same time my MessageBox message appears. So obviously not only is my command being sent immediately, my callback is also being correctly called. Why, then, am I getting a timeout exception?

    Note: even after removing the MessageBox call I still have this issue, so it's not a matter of the GUI blocking the callback response.
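
    This symptom (server work done instantly, callback eventually delivered, yet the original call times out) is the classic UI-thread deadlock with duplex WCF: by default the callback is dispatched through the SynchronizationContext captured when the channel was created, i.e. the UI thread, but that thread is still blocked inside _proxy.ProcessInput(). Everything queues up until the timeout releases the UI thread, at which point the MessageBox finally runs. If that diagnosis fits, a sketch of the usual fix on the question's own Callbacks class (UI updates then need explicit marshalling, e.g. Control.Invoke):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel;

        [CallbackBehavior(UseSynchronizationContext = false)]
        public class Callbacks : IConsoleNetworkCallbacks
        {
            public void NewOutput(IEnumerable<string> text, string category)
            {
                // Now runs on a worker thread instead of the blocked UI thread.
                Console.WriteLine("{0} lines received for '{1}' category",
                    text.Count(), category);
            }
        }

    Calling ConnectToConsoleServer from a non-UI thread, or marking the service with ConcurrencyMode.Reentrant, are alternative escapes from the same deadlock.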


  • How do I make a GUI that behaves like this?

    - by Karl Knechtel
    This is difficult to explain without illustration, so, behold, an illustration, cobbled together from screenshots of a few hello-world examples and a lot of Paint work:

    I have started out using Windows Forms on .NET (via IronPython, but that shouldn't be important), and haven't been able to figure out very much. GUI libraries in general are very intimidating, simply because every class has so many possible attributes. Documentation is good at explaining what everything does, but not so good at helping you figure out what you need. I will be assembling the GUI dynamically, but I'm not expecting that to be the hard part. The sticking points for me right now are:

    - How do I get text labels to size themselves automatically to the width of the contained text (so that the text doesn't clip, and I also don't reserve unnecessary space for them when resizing the window)?
    - How do I make the vertical scrollbar always appear? Setting the VScroll property (why is this protected when AutoScroll is public, by the way?) doesn't seem to do anything.
    - How come the horizontal scrollbar is not added by AutoScroll when contents are laid out vertically (via Dock = DockStyle.Top)? I can use a minimum size for panels to prevent the label and corresponding control from overlapping when the window is shrunk horizontally, but then the scrollbar doesn't appear and the control is inaccessible.
    - How can I put limits on window resizing (e.g. set a minimum width) without disabling it completely? (Just set minimum/maximum sizes for the Form?) Related to that, is there any way to set minimum/maximum widths or heights without setting a minimum/maximum size (i.e. can I constrain the size in only one dimension)?
    - Is there a built-in control suitable for hex editing, or am I going to have to build something myself?

    ... And should I be using something else (perhaps something more capable)? I've heard WPF mentioned, but I understand that it involves XML, and I really don't want to build a GUI from XML; I already have data in an object graph, and doing some kind of weird XML pseudo-serialization (in Python, no less!) in order to create a GUI seems incredibly roundabout.
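
    For the first and fourth sticking points, a minimal sketch may help; this is standard Windows Forms API (equally callable from IronPython), but the layout itself is invented:

        using System.Drawing;
        using System.Windows.Forms;

        static class Demo
        {
            [System.STAThread]
            static void Main()
            {
                // The label sizes itself to its text, so nothing clips and
                // no space is wasted.
                var label = new Label { AutoSize = true, Text = "Sized to fit" };

                var panel = new FlowLayoutPanel
                {
                    Dock = DockStyle.Fill,
                    FlowDirection = FlowDirection.TopDown,
                    WrapContents = false,
                    AutoScroll = true   // scrollbars appear once content overflows
                };
                panel.Controls.Add(label);

                var form = new Form { Text = "Layout sketch" };
                // A width floor without a height floor: a 0 component of
                // MinimumSize leaves that dimension unconstrained.
                form.MinimumSize = new Size(400, 0);
                form.Controls.Add(panel);
                Application.Run(form);
            }
        }

    As for hex editing, there is no stock hex-editor control in Windows Forms, so that one does mean building it yourself or pulling in a third-party control.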


  • Which MS technologies would be suited for a data intensive application?

    - by steve.tse
    I'm a junior VB.NET developer with little application-design knowledge. I've been reading a lot of material online regarding different design patterns, frameworks, and methodologies, and it's become a bit confusing for me. Right now I'm trying to decide what language would be best suited to convert an existing VB6 application (with a SQL Server backend). I need to update the UI and add more user functionality and reporting capabilities. Initially I was thinking of using WPF and attempting the MVVM model for this big project; reports would be generated from SSRS. A peer suggested using ASP.NET, and I don't have enough experience to determine what would be better. The senior programmers here are stuck on VB6 and don't have any input on what to use, though they are encouraging me to use the latest technologies. This application would be for ~20 users in a central location. Ideally I would stick to a Microsoft .NET language. The current interface is similar to a datagrid table where the user clicks in to see the detail of each record; users need to have multiple records open at any given time. I look forward to all the advice I can get.

    EDIT 2010/04/22 2:47 PM EST

    What is your audience? Internal clients within an intranet.
    How complex are the interactions you expect to implement? Not very: displaying data from SQL Server in the UI and allowing user updates to that data, typically with just one user modifying a given record.
    Do you require near real-time data updates? No.
    How often do you expect to update the application after the first release? Twice a year.
    Do you expect a well-defined set of client platforms? Yes, a Windows XP environment, potentially upgrading to Windows 7. Currently on IE6, moving to IE7 or 8 within a couple of months.
    Do users need access from anywhere? No, just from their PCs.


  • Device drivers and Windows

    - by b-gen-jack-o-neill
    Hi, I am trying to complete my picture of how the PC and the OS interact, and I am at a point where I am a little lost when it comes to device drivers. Please don't write things like "it's too complicated" or "you don't need to know this when using a high-level language and WinAPI functions". I want to know; it's for study purposes.

    The very basic structure, as I see it, is that everything other than direct CPU instructions, which the CPU can execute on its own (arithmetic operations, access to its registers, and memory access), must pass through the OS. That is mainly because from ring level 3 you cannot use the IN and OUT instructions, which are used for accessing other hardware. I know that there is MMIO, but it must first be set up through port communication.

    It was not always like this. Even though I am a bit young to remember MS-DOS, I know you could access hardware directly, because there were no restrictions, no ring modes. So to write a string to the display, you could either use a DOS function or access video card memory directly and write it yourself. But as OSes developed, that possibility disappeared. That is fine, since the OS now handles all the hardware communication, and frankly it is more convenient and much safer (I would say the only option) in a multitasking environment. So nowadays, instead of using INT instructions to invoke BIOS-mapped functions or DOS functions, you call a DLL which internally handles everything you don't need to know about. I understand this.

    I also understand that a device driver is a piece of code that runs at ring level 0, so it can do all the hardware interactions. But what I don't understand is the connection between the OS and the device driver. Let's take an example: I want to make the sound card play a sound. So I call the Windows API to access the sound card, but what happens then? Does Windows call the device driver to do so? But if it does call the device driver, does that mean all device drivers that can be called by a WinAPI function must have routines named in some specific way? I mean, when I get a new sound card, must its driver expose functions with the same names as the old one, so that Windows can, from its perspective, call the same function? But if Windows had a predefined set of functions required of device drivers, it could not use new drivers that did not exist when the last version of the OS came out. Please help me understand this mess; I am really getting mad. Thanks.


  • Dynamic swappable Data Access Layer

    - by Andy
    I'm writing a data-driven WPF client. The client will typically pull data from a WCF service, which queries a SQL DB, but I'd like the option to pull the data directly from SQL or other arbitrary data sources. I've come up with this design and would like to hear your opinion on whether it is the best design.

    First, we have some data object we'd like to extract from SQL:

        // The data object with a single property
        public class Customer
        {
            private string m_Name = string.Empty;

            public string Name
            {
                get { return m_Name; }
                set { m_Name = value; }
            }
        }

    Then I plan on using an interface which all data access layers should implement. I suppose one could also use an abstract class. Thoughts?

        // The interface with a single method
        interface ICustomerFacade
        {
            List<Customer> GetAll();
        }

    One can create a SQL implementation:

        // SQL implementation
        public class SqlCustomrFacade : ICustomerFacade
        {
            public List<Customer> GetAll()
            {
                // Query the SQL DB and return something useful
                // ...
                return new List<Customer>();
            }
        }

    We can also create a WCF implementation. The problem with WCF is that it doesn't use the same data object: it creates its own local version, so we would have to copy the details over somehow. I suppose one could use reflection to copy the values of similar fields across. Thoughts?

        // WCF implementation
        public class WcfCustomrFacade : ICustomerFacade
        {
            public List<Customer> GetAll()
            {
                // Get data from the WCF service (not defined here)
                List<WcfService.Customer> wcfCustomers = wcfService.GetAllCustomers();

                // The list we're going to return
                List<Customer> customers = new List<Customer>();

                // This is horrible
                foreach (WcfService.Customer wcfCustomer in wcfCustomers)
                {
                    Customer customer = new Customer();
                    customer.Name = wcfCustomer.Name;
                    customers.Add(customer);
                }
                return customers;
            }
        }

    I also plan on using a factory to decide which facade to use:

        // Factory pattern
        public class FacadeFactory
        {
            public static ICustomerFacade CreateCustomerFacade()
            {
                // Determine the facade to use
                if (ConfigurationManager.AppSettings["DAL"] == "Sql")
                    return new SqlCustomrFacade();
                else
                    return new WcfCustomrFacade();
            }
        }

    This is how the DAL would typically be used:

        // Test application
        public class MyApp
        {
            public static void Main()
            {
                ICustomerFacade cf = FacadeFactory.CreateCustomerFacade();
                cf.GetAll();
            }
        }

    I appreciate your thoughts and time.
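
    On the reflection idea floated above: a minimal property-copying sketch that could replace the hand-written loop in WcfCustomrFacade. It is a sketch only (flat properties, exact name matches, no type conversion); a production mapper would want caching and conversion rules:

        using System.Reflection;

        static class Mapper
        {
            // Copy same-named, readable/writable, assignable properties
            // from source onto a new TTarget instance.
            public static TTarget Map<TSource, TTarget>(TSource source)
                where TTarget : new()
            {
                var target = new TTarget();
                foreach (PropertyInfo sp in typeof(TSource).GetProperties())
                {
                    PropertyInfo tp = typeof(TTarget).GetProperty(sp.Name);
                    if (tp != null && tp.CanWrite && sp.CanRead &&
                        tp.PropertyType.IsAssignableFrom(sp.PropertyType))
                    {
                        tp.SetValue(target, sp.GetValue(source, null), null);
                    }
                }
                return target;
            }
        }

        // Usage inside WcfCustomrFacade.GetAll():
        //   customers.Add(Mapper.Map<WcfService.Customer, Customer>(wcfCustomer));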


  • ODBC: when is the best time to create my database?

    - by mawg
    I have a Windows program which generates PHP forms that will be filled in later. Those PHP forms will populate a database. It looks very much like MySQL, but I can't be certain, so let's call it ODBC. And, yes, it does have to be a Windows program. There will also be PHP forms which query the database, examine which tables and fields it contains, and then generate forms which can be used to search the database (e.g., it finds a table with fields "employee_name", etc., and generates a form which lets you search based on employee name).

    Let's call those design time and run time. At design time, some manager or IT guy or similar gets to define the nature of the database; at run time, (1) a worker fills in the form daily and (2) management can extract reports. Here's my question: given that the database is defined at design time (and populated at run time), where and how is it best to do so?

    1) I could use an ODBC interface from the Windows program, but I am having difficulty finding something good to work with in Delphi. Things like ADO and Firebird tend to expect you to already have a database and let you manipulate it, but I can find no code example of how to create a database and some tables, so...

    2) I could use DOS commands from my Windows program. I just tried and got a response to mysql --version, but am not sure whether MySQL etc. are more interactive than that. That is, can I use a script file, or a very long stacked command with semicolons and returns separating statements? E.g. 'CREATE DATABASE db; CREATE TABLE t1;'

    3) Since the best way to work with databases seems to be PHP, perhaps my Windows program could spit out a PHP page which would, when run in a browser, create the database.

    I have tried to make this as uncomplicated as I can, but please feel free to ask questions. It may be that there are several valid ways, but there is probably one "better" solution in terms of ease of implementation or maintenance.

    Better scratch option 3. What if the user later wants to come back and have the Windows program change the input form? It needs to update the database too.
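
    On option 2: the mysql command-line client will happily execute a whole script of semicolon-separated statements non-interactively, either piped in or read from a file, so the Windows program only needs to shell out once. A sketch with invented database and table names:

        -- schema.sql
        CREATE DATABASE IF NOT EXISTS timesheets;
        USE timesheets;
        CREATE TABLE IF NOT EXISTS employees (
            id            INT AUTO_INCREMENT PRIMARY KEY,
            employee_name VARCHAR(100) NOT NULL
        );

    Run from the Windows side as something like:

        mysql -u root -p < schema.sql

    or pass the statements inline with mysql -e "CREATE DATABASE db; CREATE TABLE t1 (...);". The same script-of-statements approach also works through ODBC, executed one statement at a time.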


  • How to properly mix generics and inheritance to get the desired result?

    - by yamsha
    My question is not easy to explain using words; fortunately, it's not too difficult to demonstrate. So, bear with me:

        public interface Command<R> {
            // R is the type of object returned as the result of executing this command
            public R execute();
        }

        public abstract class BasicCommand<R> {
        }

        public interface CommandProcessor<C extends Command<?>> {
            // this is my question... it's illegal to do, but you understand
            // the idea behind it, right?
            public <R> R process(C<R> command);
        }

        // constrain BasicCommandProcessor to commands that subclass BasicCommand
        public class BasicCommandProcessor implements CommandProcessor<C extends BasicCommand<?>> {
            // Here, only subclasses of BasicCommand should be allowed as
            // arguments, but those BasicCommand objects should be
            // parameterized by R, like so: BasicCommand<R>. So the method
            // signature should really be
            //     public <R> R process(BasicCommand<R> command)
            // which would break the inheritance if the interface's method
            // signature were instead
            //     public <R> R process(Command<R> command);
            // I really hope this fully illustrates my conundrum.
            public <R> R process(C<R> command) {
                return command.execute();
            }
        }

        public class CommandContext {
            public static void main(String... args) {
                BasicCommandProcessor bcp = new BasicCommandProcessor();

                String textResult = bcp.process(new BasicCommand<String>() {
                    public String execute() {
                        return "result";
                    }
                });

                Long numericResult = bcp.process(new BasicCommand<Long>() {
                    public Long execute() {
                        return 123L;
                    }
                });
            }
        }

    Basically, I want the generic "process" method to dictate the type of the generic parameter of the Command object. The goal is to restrict different implementations of CommandProcessor to certain classes that implement the Command interface, while still being able to call the process method of any class that implements the CommandProcessor interface and have it return an object of the type specified by the parameterized Command object. I'm not sure if my explanation is clear enough, so please let me know if further explanation is needed. I guess the question is, "Would this be possible to do at all?" If the answer is "No", what would be the best workaround? (I thought of a couple on my own, but I'd like some fresh ideas.)
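
    For what it's worth, the literal feature being reached for here, abstracting over a type constructor so an interface can say C<R>, is a higher-kinded type, and Java generics cannot express it. A common workaround is to keep the result type generic at the method level and pin the command family in each processor's own signature; a sketch (names invented, and BasicCommand now implements Command so the anonymous classes compile):

        interface Command<R> {
            R execute();
        }

        abstract class BasicCommand<R> implements Command<R> {
        }

        class BasicCommandProcessor {
            // Only BasicCommand subtypes are accepted, yet the result type R
            // still flows from the command through to the caller.
            public <R> R process(BasicCommand<R> command) {
                return command.execute();
            }
        }

        class CommandContext {
            public static void main(String... args) {
                BasicCommandProcessor bcp = new BasicCommandProcessor();
                String text = bcp.process(new BasicCommand<String>() {
                    public String execute() { return "result"; }
                });
                Long number = bcp.process(new BasicCommand<Long>() {
                    public Long execute() { return 123L; }
                });
                System.out.println(text + " / " + number);
            }
        }

    The cost is that there is no single CommandProcessor supertype spelling out process() for all families; if that shared supertype is essential, the usual compromises are accepting Command<R> with runtime checks, or visitor-style double dispatch.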


  • sigwait in Linux (Fedora 13) vs OS X

    - by Silas
    So I'm trying to create a signal handler using pthreads which works on both OS X and Linux. The code below works on OS X but doesn't work on Fedora 13. The application is fairly simple: it spawns a pthread, registers SIGHUP, and waits for a signal. After spawning the signal handler I block SIGHUP in the main thread, so the signal should only be sent to the signal_handler thread. On OS X this works fine: if I compile, run, and send SIGHUP to the process, it prints "Got SIGHUP". On Linux it just kills the process (and prints "Hangup"). If I comment out the signal_handler pthread_create, the application doesn't die. I know the application gets to the sigwait and blocks, but instead of returning the signal code it just kills the application. I ran the test using the following commands:

        g++ test.cc -lpthread -o test
        ./test &
        PID="$!"
        sleep 1
        kill -1 "$PID"

    test.cc:

        #include <pthread.h>
        #include <signal.h>
        #include <unistd.h>
        #include <iostream>

        using namespace std;

        void *signal_handler(void *arg) {
            int sig;
            sigset_t set;
            sigemptyset(&set);
            sigaddset(&set, SIGHUP);

            while (true) {
                cout << "Wait for signal" << endl;
                sigwait(&set, &sig);
                if (sig == SIGHUP) {
                    cout << "Got SIGHUP" << endl;
                }
            }
        }

        int main() {
            pthread_t handler;
            sigset_t set;

            // Create signal handler
            pthread_create(&handler, NULL, signal_handler, NULL);

            // Ignore SIGHUP in main thread
            sigfillset(&set);
            sigaddset(&set, SIGHUP);
            pthread_sigmask(SIG_BLOCK, &set, NULL);

            for (int i = 1; i < 5; i++) {
                cout << "Sleeping..." << endl;
                sleep(1);
            }

            pthread_join(handler, NULL);
            return 0;
        }
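
    POSIX only defines sigwait() for signals that are blocked in the calling thread, and on Linux the unblocked window here is fatal: the handler thread starts with SIGHUP unblocked (it inherits main's mask from before pthread_sigmask runs), so the process-directed SIGHUP can be delivered to it with the default "Hangup" action instead of being consumed by sigwait(). The usual fix, sketched below, is to block the signal in main before any thread exists, so every thread inherits the blocked mask; OS X merely happens to be more forgiving about the ordering:

        int main() {
            sigset_t set;
            sigemptyset(&set);
            sigaddset(&set, SIGHUP);
            // Block SIGHUP before spawning any threads: children inherit
            // this mask, so the only way the signal can be consumed is
            // through sigwait() in the handler thread.
            pthread_sigmask(SIG_BLOCK, &set, NULL);

            pthread_t handler;
            pthread_create(&handler, NULL, signal_handler, NULL);

            for (int i = 1; i < 5; i++) {
                cout << "Sleeping..." << endl;
                sleep(1);
            }
            pthread_join(handler, NULL);
            return 0;
        }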


  • Ajax Control Toolkit December 2013 Release

    - by Stephen.Walther
    Today, we released a new version of the Ajax Control Toolkit that contains several important bug fixes and new features. The new release contains a new Tabs control that has been entirely rewritten in jQuery. You can download the December 2013 release of the Ajax Control Toolkit at http://Ajax.CodePlex.com. Alternatively, you can install the latest version directly from NuGet.

    The Ajax Control Toolkit and jQuery

    The Ajax Control Toolkit now contains two controls written with jQuery: the ToggleButton control and the Tabs control. The goal is to rewrite the Ajax Control Toolkit to use jQuery instead of the Microsoft Ajax Library gradually over time. The motivation for rewriting the controls in the Ajax Control Toolkit to use jQuery is to modernize the toolkit. We want to continue to accept new controls written for the Ajax Control Toolkit contributed by the community. The community wants to use jQuery. We want to make it easy for the community to submit bug fixes. The community understands jQuery.

    Using the Ajax Control Toolkit with a Website that Already Uses jQuery

    But what if you are already using jQuery in your website? Will adding the Ajax Control Toolkit to your website break your existing website? No, and here is why. The Ajax Control Toolkit uses jQuery.noConflict() to avoid conflicting with an existing version of jQuery in a page. The version of jQuery that the Ajax Control Toolkit uses is represented by a variable named actJQuery. You can use actJQuery side-by-side with an existing version of jQuery in a page without conflict. Imagine, for example, that you add jQuery to an ASP.NET page using a <script> tag like this:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="TestACTDec2013.WebForm1" %>

        <!DOCTYPE html>
        <html>
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <script src="Scripts/jquery-2.0.3.min.js"></script>
                <ajaxToolkit:ToolkitScriptManager runat="server" />
                <ajaxToolkit:TabContainer runat="server">
                    <ajaxToolkit:TabPanel runat="server">
                        <HeaderTemplate>Tab 1</HeaderTemplate>
                        <ContentTemplate><h1>First Tab</h1></ContentTemplate>
                    </ajaxToolkit:TabPanel>
                    <ajaxToolkit:TabPanel runat="server">
                        <HeaderTemplate>Tab 2</HeaderTemplate>
                        <ContentTemplate><h1>Second Tab</h1></ContentTemplate>
                    </ajaxToolkit:TabPanel>
                </ajaxToolkit:TabContainer>
            </div>
            </form>
        </body>
        </html>

    The page above uses the Ajax Control Toolkit Tabs control (the TabContainer and TabPanel controls). The Tabs control uses the version of jQuery that is currently bundled with the Ajax Control Toolkit (jQuery version 1.9.1). The page above also includes a <script> tag that references jQuery version 2.0.3; you might need that particular version of jQuery, for example, to use a particular jQuery plugin. The two versions of jQuery in the page do not create a conflict. This fact can be demonstrated by entering the following two commands in the JavaScript console window:

        actJQuery.fn.jquery
        $.fn.jquery

    Typing actJQuery.fn.jquery will display the version of jQuery used by the Ajax Control Toolkit, and typing $.fn.jquery (or jQuery.fn.jquery) will show the version of jQuery used by the other jQuery plugins in the page.

    Preventing jQuery from Loading Twice

    So, by default, the Ajax Control Toolkit will not conflict with any existing version of jQuery used in your application. However, this does mean that if you are already using jQuery in your application then jQuery will be loaded twice. For performance reasons, you might want to avoid loading the jQuery library twice. By taking advantage of the <remove> element in the AjaxControlToolkit.config file, you can prevent the Ajax Control Toolkit from loading its version of jQuery:

        <ajaxControlToolkit>
            <scripts>
                <remove name="jQuery.jQuery.js" />
            </scripts>
            <controlBundles>
                <controlBundle>
                    <control name="TabContainer" />
                    <control name="TabPanel" />
                </controlBundle>
            </controlBundles>
        </ajaxControlToolkit>

    Be careful here: the name of the script being removed, jQuery.jQuery.js, is case-sensitive. If you remove jQuery then it is your responsibility to add the exact same version of jQuery back into your application. You can add jQuery back using a <script> tag like this:

        <script src="Scripts/jquery-1.9.1.min.js"></script>

    Make sure that you add the <script> tag before the server-side <form> tag or the Ajax Control Toolkit won't detect the presence of jQuery. Alternatively, you can use the ToolkitScriptManager like this:

        <ajaxToolkit:ToolkitScriptManager runat="server">
            <Scripts>
                <asp:ScriptReference Name="jQuery.jQuery.js" />
            </Scripts>
        </ajaxToolkit:ToolkitScriptManager>

    The Ajax Control Toolkit is tested against the particular version of jQuery that is bundled with it; currently, that is jQuery version 1.9.1. If you attempt to use a different version of jQuery with the Ajax Control Toolkit then you will get the exception "jQuery 1.9.1 is required" in your JavaScript console window. If you need to use a different version of jQuery in the same page as the Ajax Control Toolkit, then you should not use the <remove> element; instead, allow the Ajax Control Toolkit to load its version of jQuery side-by-side with the other version.

    Lots of Bug Fixes

    As usual, we implemented several important bug fixes with this release. The bug fixes concerned the following three controls:

    Tabs control: in the course of rewriting the Tabs control to use jQuery, we fixed several bugs related to the Tabs control.
    AjaxFileUpload control: we resolved an issue concerning the AjaxFileUpload and the TMP directory.
    HTMLEditor control: we updated the HTMLEditor control to use the new Ajax Control Toolkit bundling and minification framework.

    Summary

    I would like to thank the Superexpert team for their hard work on this release. Many long hours of coding and testing went into making this release possible.


  • apt-get install and update fail

    - by sepehr
I've got a problem with apt-get update and apt-get install ... commands. Every update or install fails, and the errors are:

Get:1 http://dl.google.com stable Release.gpg [198B]
Ign http://dl.google.com/linux/chrome/deb/ stable/main Translation-en_US
Get:2 http://dl.google.com stable Release [1,347B]
Get:3 http://dl.google.com stable/main Packages [1,227B]
Err http://32.repository.backtrack-linux.org revolution Release.gpg Could not connect to 32.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out)
Err http://32.repository.backtrack-linux.org/ revolution/main Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://32.repository.backtrack-linux.org/ revolution/microverse Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://32.repository.backtrack-linux.org/ revolution/non-free Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://32.repository.backtrack-linux.org/ revolution/testing Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://all.repository.backtrack-linux.org revolution Release.gpg Could not connect to all.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out)
Err http://all.repository.backtrack-linux.org/ revolution/main Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http:
Err http://all.repository.backtrack-linux.org/ revolution/microverse Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http:
Err http://all.repository.backtrack-linux.org/ revolution/non-free Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http:
Err http://all.repository.backtrack-linux.org/ revolution/testing Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http:
Ign http://32.repository.backtrack-linux.org revolution Release
Ign http://all.repository.backtrack-linux.org revolution Release
Ign http://32.repository.backtrack-linux.org revolution/main Packages
Ign http://all.repository.backtrack-linux.org revolution/main Packages
Ign http://32.repository.backtrack-linux.org revolution/microverse Packages
Ign http://32.repository.backtrack-linux.org revolution/non-free Packages
Ign http://32.repository.backtrack-linux.org revolution/testing Packages
Ign http://all.repository.backtrack-linux.org revolution/microverse Packages
Ign http://all.repository.backtrack-linux.org revolution/non-free Packages
Ign http://all.repository.backtrack-linux.org revolution/testing Packages
Ign http://32.repository.backtrack-linux.org revolution/main Packages
Ign http://32.repository.backtrack-linux.org revolution/microverse Packages
Ign http://32.repository.backtrack-linux.org revolution/non-free Packages
Ign http://all.repository.backtrack-linux.org revolution/main Packages
Ign http://all.repository.backtrack-linux.org revolution/microverse Packages
Ign http://all.repository.backtrack-linux.org revolution/non-free Packages
Ign http://all.repository.backtrack-linux.org revolution/testing Packages
Err http://all.repository.backtrack-linux.org revolution/main Packages Unable to connect to all.repository.backtrack-linux.org:http:
Err http://all.repository.backtrack-linux.org revolution/microverse Packages Unable to connect to all.repository.backtrack-linux.org:http:
Ign http://32.repository.backtrack-linux.org revolution/testing Packages
Err http://32.repository.backtrack-linux.org revolution/main Packages Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://32.repository.backtrack-linux.org revolution/microverse Packages Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://all.repository.backtrack-linux.org revolution/non-free Packages Unable to connect to all.repository.backtrack-linux.org:http:
Err http://all.repository.backtrack-linux.org revolution/testing Packages Unable to connect to all.repository.backtrack-linux.org:http:
Err http://32.repository.backtrack-linux.org revolution/non-free Packages Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://32.repository.backtrack-linux.org revolution/testing Packages Unable to connect to 32.repository.backtrack-linux.org:http:
Err http://source.repository.backtrack-linux.org revolution Release.gpg Could not connect to source.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out)
Err http://source.repository.backtrack-linux.org/ revolution/main Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http:
Err http://source.repository.backtrack-linux.org/ revolution/microverse Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http:
Err http://source.repository.backtrack-linux.org/ revolution/non-free Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http:
Err http://source.repository.backtrack-linux.org/ revolution/testing Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http:
Ign http://source.repository.backtrack-linux.org revolution Release
Ign http://source.repository.backtrack-linux.org revolution/main Packages
Ign http://source.repository.backtrack-linux.org revolution/microverse Packages
Ign http://source.repository.backtrack-linux.org revolution/non-free Packages
Ign http://source.repository.backtrack-linux.org revolution/testing Packages
Ign http://source.repository.backtrack-linux.org revolution/main Packages
Ign http://source.repository.backtrack-linux.org revolution/microverse Packages
Ign http://source.repository.backtrack-linux.org revolution/non-free Packages
Ign http://source.repository.backtrack-linux.org revolution/testing Packages
Err http://source.repository.backtrack-linux.org revolution/main Packages Unable to connect to source.repository.backtrack-linux.org:http:
Err http://source.repository.backtrack-linux.org revolution/microverse Packages Unable to connect to source.repository.backtrack-linux.org:http:
Err http://source.repository.backtrack-linux.org revolution/non-free Packages Unable to connect to source.repository.backtrack-linux.org:http:
Err http://source.repository.backtrack-linux.org revolution/testing Packages Unable to connect to source.repository.backtrack-linux.org:http:
Fetched 2,772B in 1min 3s (44B/s)
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/Release.gpg Could not connect to all.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out)
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/main/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/microverse/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/non-free/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/testing/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/Release.gpg Could not connect to 32.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out)
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/main/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/microverse/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/non-free/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/testing/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/Release.gpg Could not connect to source.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out)
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/main/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/microverse/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/non-free/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/testing/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/main/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/microverse/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/non-free/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/main/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/microverse/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/non-free/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/testing/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http:
W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/testing/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/main/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/microverse/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/non-free/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http:
W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/testing/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http:
E: Some index files failed to download, they have been ignored, or old ones used instead.

I don't know how to get out of this! I want to install the RPM and YUM packages on my BackTrack machine. I also searched the internet for an answer, in the BackTrack forums and on other sites and weblogs, but couldn't find a good one. Can anyone help?
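Every failure above points at one unreachable host (37.221.173.214 serves the 32., all. and source. BackTrack repositories), while the Google repository fetches fine, which suggests a dead or blocked mirror rather than a local apt problem. For reference, a minimal troubleshooting sketch (generic commands, not specific to this setup; the hostnames are taken from the errors above):

ping -c 3 all.repository.backtrack-linux.org        # is the host reachable at all?
nc -zv -w 5 all.repository.backtrack-linux.org 80   # does anything answer on port 80?
sudo nano /etc/apt/sources.list                     # prefix the unreachable 'deb' lines with '#'
sudo apt-get update

If the host stays unreachable from every network you try, the mirror itself is down, and commenting those lines out at least lets apt-get update finish against the repositories that still work.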

    Read the article

  • Scan a Windows PC for Viruses from a Ubuntu Live CD

    - by Trevor Bekolay
Getting a virus is bad. Getting a virus that causes your computer to crash when you reboot is even worse. We’ll show you how to clean viruses from your computer even if you can’t boot into Windows by using a virus scanner in a Ubuntu Live CD. There are a number of virus scanners available for Ubuntu, but we’ve found that avast! is the best choice, with great detection rates and usability. Unfortunately, avast! does not have a proper 64-bit version, and forcing the install does not work properly. If you want to use avast! to scan for viruses, then ensure that you have a 32-bit Ubuntu Live CD. If you currently have a 64-bit Ubuntu Live CD on a bootable flash drive, it does not take long to wipe your flash drive and go through our guide again, selecting normal (32-bit) Ubuntu 9.10 instead of the x64 edition. For the purposes of fixing your Windows installation, the 64-bit Live CD will not provide any benefits. Once Ubuntu 9.10 boots up, open up Firefox by clicking on its icon in the top panel. Navigate to http://www.avast.com/linux-home-edition. Click on the Download tab, and then click on the link to download the DEB package. Save it to the default location. While avast! is downloading, click on the link to the registration form on the download page. Fill in the registration form if you do not already have a trial license for avast!. By the time you’ve filled out the registration form, avast! will hopefully be finished downloading. Open a terminal window by clicking on Applications in the top-left corner of the screen, then expanding the Accessories menu and clicking on Terminal. In the terminal window, type in the following commands, pressing enter after each line:

cd Downloads
sudo dpkg -i avast*

This will install avast! on the live Ubuntu environment. To ensure that you can use the latest virus database, while still in the terminal window, type in the following command:

sudo sysctl -w kernel.shmmax=128000000

Now we’re ready to open avast!. Click on Applications on the top-left corner of the screen, expand the Accessories folder, and click on the new avast! Antivirus item. You will first be greeted with a window that asks for your license key. Hopefully you’ve received it in your email by now; open the email that avast! sends you, copy the license key, and paste it in the Registration window. avast! Antivirus will open. You’ll notice that the virus database is outdated. Click on the Update database button and avast! will start downloading the latest virus database. To scan your Windows hard drive, you will need to “mount” it. While the virus database is downloading, click on Places on the top-left of your screen, and click on your Windows hard drive, if you can tell which one it is by its size. If you can’t tell which is the correct hard drive, then click on Computer and check out each hard drive until you find the right one. When you find it, make a note of the drive’s label, which appears in the menu bar of the file browser. Also note that your hard drive will now appear on your desktop. By now, your virus database should be updated. At the time this article was written, the most recent version was 100404-0. In the main avast! window, click on the radio button next to Selected folders and then click on the “+” button to the right of the list box. It will open up a dialog box to browse to a location. To find your Windows hard drive, click on the “>” next to the computer icon. In the expanded list, find the folder labelled “media” and click on the “>” next to it to expand it.
In this list, you should be able to find the label that corresponds to your Windows hard drive. If you want to scan a certain folder, then you can go further into this hierarchy and select that folder. However, we will scan the entire hard drive, so we’ll just press OK. Click on Start scan and avast! will start scanning your hard drive. If a virus is found, you’ll be prompted to select an action. If you know that the file is a virus, then you can Delete it, but there is the possibility of false positives, so you can also choose Move to chest to quarantine it. When avast! is done scanning, it will summarize what it found on your hard drive. You can take different actions on those files at this time by right-clicking on them and selecting the appropriate action. When you’re done, click Close. Your Windows PC is now free of viruses, in the eyes of avast!. Reboot your computer and with any luck it will now boot up!

Alternatives to avast!
If avast! and a liberal amount of Googling doesn’t fix your problem, it’s possible that a different virus scanner will fix your obscure issue. Here is a list of other virus scanners available for Ubuntu that are either free or offer free trials. See their support forums for help on installing these virus scanners.

Avira AntiVir Personal for Linux / Solaris
Panda Antivirus for Linux (installation and usage guide from Ubuntu)
F-PROT Antivirus for Linux
ClamAV (installation and usage guide from Ubuntu)
NOD32 Antivirus for Linux
Kaspersky Anti-Virus 2010
Bitdefender Antivirus for Unices

Conclusion
Running avast! from a Ubuntu Live CD can clean the vast majority of viruses from your Windows PC. This is another reason to always have a Ubuntu Live CD ready just in case something happens to your Windows installation!
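One more note on the mounting step above: if your Windows drive does not appear under Places, you can also mount it manually from the terminal. A minimal sketch, assuming the Windows partition is /dev/sda1 (yours may differ, so check the partition list first):

sudo fdisk -l                                         # list partitions; look for the NTFS one
sudo mkdir -p /media/windows
sudo mount -t ntfs-3g -o ro /dev/sda1 /media/windows  # read-only is safest while scanning

You can then point avast! at /media/windows in the folder-selection dialog instead.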

    Read the article

  • How to remove MySQL completely with config and library files on ubuntu 12.04 gnome 3.0

    - by codeartist
I tried everything till now:

sudo apt-get remove mysql-server mysql-client mysql-common
sudo apt-get purge mysql-server mysql-client mysql-common
sudo apt-get autoremove

and even more commands. But whenever I try to locate mysql, I still get a number of files related to it.

Command:
shell>> locate mysql

Output:
/etc/mysql
/etc/apparmor.d/usr.sbin.mysqld
/etc/apparmor.d/abstractions/mysql
/etc/apparmor.d/cache/usr.sbin.mysqld
/etc/apparmor.d/cache/usr.sbin.mysqld-akonadi
/etc/apparmor.d/local/usr.sbin.mysqld
/etc/bash_completion.d/mysqladmin
/etc/init/mysql.conf
/etc/logcheck/ignore.d.paranoid/mysql-server-5_5
/etc/logcheck/ignore.d.server/mysql-server-5_5
/etc/logcheck/ignore.d.workstation/mysql-server-5_5
/etc/logrotate.d/mysql-server
/etc/mysql/conf.d
/etc/mysql/debian-start
/etc/mysql/debian.cnf
/etc/mysql/conf.d/mysqld_safe_syslog.cnf
/home/pkr/.mysql_history
/home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,libqt4-sql-mysql,,349051c3a57da571aa832adb39177aff
/home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,mysql-client,,cbf77a486cdc80547317981a33144427
/home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,mysql-client,,de8220dee4d957a9502caa79e8d2fdda
/home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,any,any,any,libqt4-sql-mysql,page,1,helpful,,17fb2e657321dc51526ee8fe9928da30
/home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,any,any,any,mysql-client,page,1,helpful,,a4c1b6e8200f36ab5745c6f81f14da0a
/home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,oneiric,any,libqt4-sql-mysql,page,1,helpful,,c54295fb82b8183350cd34f22c3547ef
/home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,oneiric,any,mysql-client,page,1,helpful,,fcf201c1abff3f774af89173a84de2cc
/home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,precise,any,libqt4-sql-mysql,page,1,helpful,,0cd86648584efeccfb16119012f89540
/home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,precise,any,mysql-client,page,1,helpful,,eb84724e9da7851ff8862a227d8bac59
/home/pkr/.local/share/akonadi/mysql.conf
/home/pkr/.local/share/akonadi/db_data/mysql
/home/pkr/.local/share/akonadi/db_data/mysql.err
/home/pkr/.local/share/akonadi/db_data/mysql.err.old
/home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.frm
/home/pkr/.local/share/akonadi/db_data/mysql/db.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/db.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/db.frm
/home/pkr/.local/share/akonadi/db_data/mysql/event.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/event.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/event.frm
/home/pkr/.local/share/akonadi/db_data/mysql/func.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/func.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/func.frm
/home/pkr/.local/share/akonadi/db_data/mysql/general_log.CSM
/home/pkr/.local/share/akonadi/db_data/mysql/general_log.CSV
/home/pkr/.local/share/akonadi/db_data/mysql/general_log.frm
/home/pkr/.local/share/akonadi/db_data/mysql/help_category.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/help_category.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/help_category.frm
/home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.frm
/home/pkr/.local/share/akonadi/db_data/mysql/help_relation.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/help_relation.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/help_relation.frm
/home/pkr/.local/share/akonadi/db_data/mysql/help_topic.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/help_topic.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/help_topic.frm
/home/pkr/.local/share/akonadi/db_data/mysql/host.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/host.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/host.frm
/home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.frm
/home/pkr/.local/share/akonadi/db_data/mysql/plugin.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/plugin.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/plugin.frm
/home/pkr/.local/share/akonadi/db_data/mysql/proc.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/proc.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/proc.frm
/home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.frm
/home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.frm
/home/pkr/.local/share/akonadi/db_data/mysql/servers.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/servers.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/servers.frm
/home/pkr/.local/share/akonadi/db_data/mysql/slow_log.CSM
/home/pkr/.local/share/akonadi/db_data/mysql/slow_log.CSV
/home/pkr/.local/share/akonadi/db_data/mysql/slow_log.frm
/home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.frm
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone.frm
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.frm
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.frm
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.frm
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.frm
/home/pkr/.local/share/akonadi/db_data/mysql/user.MYD
/home/pkr/.local/share/akonadi/db_data/mysql/user.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/user.frm
/usr/bin/mysql
/usr/bin/mysql_install_db
/usr/bin/mysql_upgrade
/usr/bin/mysqlcheck
/usr/sbin/mysqld
/usr/share/mysql
/usr/share/app-install/desktop/gmysqlcc:gmysqlcc.desktop
/usr/share/app-install/desktop/mysql-client.desktop
/usr/share/app-install/desktop/mysql-navigator:mysql-navigator.desktop
/usr/share/app-install/desktop/mysql-server.desktop
/usr/share/app-install/icons/gmysqlcc-32.png
/usr/share/app-install/icons/mysql-navigator.png
/usr/share/doc/mysql-client-core-5.5
/usr/share/doc/mysql-server-core-5.5
/usr/share/kde4/apps/katepart/syntax/sql-mysql.xml
/usr/share/man/man1/mysql.1.gz
/usr/share/man/man1/mysql_install_db.1.gz
/usr/share/man/man1/mysql_upgrade.1.gz
/usr/share/man/man1/mysqlcheck.1.gz
/usr/share/man/man8/mysqld.8.gz
/var/cache/apt/archives/akonadi-backend-mysql_1.7.2-0ubuntu1_all.deb
/var/cache/apt/archives/libmysqlclient-dev_5.5.22-0ubuntu1_i386.deb
/var/cache/apt/archives/libmysqlclient18_5.5.22-0ubuntu1_i386.deb
/var/cache/apt/archives/libqt4-sql-mysql_4%3a4.8.1-0ubuntu4.1_i386.deb
/var/cache/apt/archives/mysql-client-5.5_5.5.22-0ubuntu1_i386.deb
/var/cache/apt/archives/mysql-client-core-5.5_5.5.22-0ubuntu1_i386.deb
/var/cache/apt/archives/mysql-client_5.5.22-0ubuntu1_all.deb
/var/cache/apt/archives/mysql-common_5.5.22-0ubuntu1_all.deb
/var/cache/apt/archives/mysql-server-5.5_5.5.22-0ubuntu1_i386.deb
/var/cache/apt/archives/mysql-server-core-5.5_5.5.22-0ubuntu1_i386.deb
/var/cache/apt/archives/mysql-server_5.5.22-0ubuntu1_all.deb
/var/lib/dpkg/info/mysql-client-core-5.5.list
/var/lib/dpkg/info/mysql-client-core-5.5.md5sums
/var/lib/dpkg/info/mysql-server-5.5.list
/var/lib/dpkg/info/mysql-server-5.5.postrm
/var/lib/dpkg/info/mysql-server-core-5.5.list
/var/lib/dpkg/info/mysql-server-core-5.5.md5sums
/var/log/mysql
/var/log/mysql.err
/var/log/mysql.log
/var/log/mysql.log.1.gz
/var/log/mysql.log.2.gz
/var/log/mysql.log.3.gz
/var/log/mysql.log.4.gz
/var/log/mysql.log.5.gz
/var/log/mysql.log.6.gz
/var/log/mysql.log.7.gz
/var/log/upstart/mysql.log.1.gz
/var/log/upstart/mysql.log.2.gz
/var/log/upstart/mysql.log.3.gz
/var/log/upstart/mysql.log.4.gz
/var/log/upstart/mysql.log.5.gz
/var/log/upstart/mysql.log.6.gz
/var/log/upstart/mysql.log.7.gz

What should I do now? Please help me out with this :( I was trying to find out if there is any way I can remove every MySQL-related file and then reinstall MySQL from scratch. I need it for Qt connectivity. I don't understand what to do! Please help :(
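Two details in that output are worth noting before deleting anything: locate reads from a prebuilt index, so it keeps reporting files until the index is rebuilt with updatedb, and everything under /home/pkr/.local/share/akonadi belongs to KDE's Akonadi service (which ships its own embedded MySQL), not to the server packages. With that in mind, a minimal cleanup sketch (package names taken from the 5.5 packages visible in the output above; adjust to match your system, and note the rm step deletes your databases):

sudo apt-get purge mysql-server mysql-server-5.5 mysql-server-core-5.5 mysql-client mysql-client-5.5 mysql-client-core-5.5 mysql-common
sudo apt-get autoremove
sudo apt-get clean                                     # removes the cached .deb files under /var/cache/apt/archives
sudo rm -rf /etc/mysql /var/lib/mysql /var/log/mysql*  # leftover config, data, and logs, if present
sudo updatedb                                          # rebuild the locate index so stale entries disappear

After that, sudo apt-get install mysql-server mysql-client should give you a clean installation to use for the Qt connectivity work.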

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and its new features or about changes to WPF and Silverlight and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That’s why when I heard there were going to be concurrent container implementations in the latest version of .NET I was salivating like Pavlov’s dog at the dinner bell. They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into. Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework which I won’t get into in this post but will cover bits and pieces of as time goes by. This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be processing either a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue. Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we’re testing in a thread-safe manner:

internal class Producer
{
    public int Iterations { get; set; }
    public Action<string> ProduceDelegate { get; set; }

    public void Produce()
    {
        for (int i = 0; i < Iterations; i++)
        {
            ProduceDelegate("Hello");
        }
    }
}

Then likewise, I created a consumer that takes a Func<string> that reads from whichever queue we’re testing and returns either the string if data exists or null if not.  If the item doesn’t exist, it will do a 10 ms wait before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

internal class Consumer
{
    public Func<string> ConsumeDelegate { get; set; }
    public bool HaltWhenEmpty { get; set; }

    public void Consume()
    {
        bool processing = true;

        while (processing)
        {
            string result = ConsumeDelegate();

            if (result == null)
            {
                if (HaltWhenEmpty)
                {
                    processing = false;
                }
                else
                {
                    Thread.Sleep(TimeSpan.FromMilliseconds(10));
                }
            }
            else
            {
                DoWork(); // do something non-trivial so consumers lag behind a bit
            }
        }
    }
}

Okay, now that we’ve done that, we can launch varying numbers of threads using lambdas for each different method of production/consumption.
First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

// lambda for putting to typical Queue with locking...
Action<string> productionDelegate = s =>
{
    lock (_mutex)
    {
        _mutexQueue.Enqueue(s);
    }
};

// and lambda for typical getting from Queue with locking...
Func<string> consumptionDelegate = () =>
{
    lock (_mutex)
    {
        if (_mutexQueue.Count > 0)
        {
            return _mutexQueue.Dequeue();
        }
    }
    return null;
};

Nothing new or interesting here.  Just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

// lambda for putting to a ConcurrentQueue, notice it needs no locking!
Action<string> productionDelegate = s =>
{
    _concurrentQueue.Enqueue(s);
};

// lambda for getting from a ConcurrentQueue, once again, no locking required.
Func<string> consumptionDelegate = () =>
{
    string s;
    return _concurrentQueue.TryDequeue(out s) ? s : null;
};

So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results.  Basically I’m timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code; basically it just creates the producers and consumers, launches them in their own threads, then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty. These are the results in milliseconds from the ordinary Queue with locking:

Consumers  Producers      1      2      3  Time (ms)
---------  ---------  -----  -----  -----  ---------
        1          1   4284   5153   4226    4554.33
       10         10   4044   3831   5010    4295.00
      100        100   5497   5378   5612    5495.67
     1000       1000  24234  25409  27160   25601.00

And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

Consumers  Producers      1      2      3  Time (ms)
---------  ---------  -----  -----  -----  ---------
        1          1   3647   3643   3718    3669.33
       10         10   2311   2136   2142    2196.33
      100        100   2480   2416   2190    2362.00
     1000       1000   7289   6897   7061    7082.33

Note that even though obviously 2000 threads is quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly.  I love the new concurrent collections; they look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

General Enhancements
This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:
- Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
- Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature reduces the time needed to get the product set up, configured, and operational.

Lifecycle Automation Enhancements
Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

Change Management:
- Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured, reducing the time needed to get set up, configured, and operational.

Customization Manager:
- Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
- Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings and E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his or her own mappings. Users can use public mappings, but cannot edit or change settings.
- Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page.
- Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
- Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction.
- OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
- Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
- Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
- HTML Package Readme: The package Readme is in HTML format and includes the file listing.
- Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced package search (that is, Public, Last Updated by, File Source Mapping, and E-Business Suite Mapping).
- Enhanced Package Build Notifications: More detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

Patch Manager:
- Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections.
- Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run, yielding more comprehensive and correct Patch Runs. Removes many manual and laborious tasks, freeing up Apps DBAs for higher value-added tasks.
- Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process; available for Release 12 only.

Setup Manager:
- Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine tuning of extracts.
- Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
- Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
- Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, providing native support for an industry standard format.
- Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; there is no need to redefine the Concurrent Managers.

User Experience Management Enhancements
Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support.
- KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (by default, two decimal places). It is now also possible to view KPI numerator and denominator information and to have it available for export.
- Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents.
- Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database (it is possible to configure an alternative database for its storage), and access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
- SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.
- Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items; in the case of dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour). There is also a performance improvement when querying the all sessions group.

Technical Prerequisites, Download and Installation Instructions
The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:
- Oracle Solaris on SPARC (64-bit) (9, 10)
- HP-UX Itanium (11.23, 11.31)
- HP-UX PA-RISC (64-bit) (11.23, 11.31)
- IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. It also contains files for uninstalled, disabled Windows components. Even if you don’t have a Windows component installed, it will be present in your WinSXS folder, taking up space.

Why the WinSXS Folder Gets So Big
The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder, which holds every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component there as well. This means that every Windows update you install increases the size of your WinSXS folder. This allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update — but it’s a feature that’s rarely used. Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack — Service Pack 1 — released in 2010. Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn’t be easily removed.

Clean Up Update Files
To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare — it was rolled out in a typical minor operating system update, the kind that doesn’t generally add new features. To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type “disk cleanup” into the Start menu, and press Enter). Click the Clean up system files button, enable the Windows Update Cleanup option, and click OK. If you’ve been using your Windows 7 system for a few years, you’ll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don’t see this feature in the Disk Cleanup window, you’re likely behind on your updates — install the latest updates from Windows Update. Windows 8 and 8.1 include built-in features that do this automatically. In fact, there’s a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you’ve installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you’d like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type “disk cleanup” to perform a search, and click the “Free up disk space by removing unnecessary files” shortcut that appears.) Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven’t been around for more than 30 days. These commands must be run in an elevated Command Prompt — in other words, start the Command Prompt window as Administrator.
For example, the following command will uninstall all previous versions of components without the scheduled task’s 30-day grace period:

DISM.exe /online /Cleanup-Image /StartComponentCleanup

The following command will remove files needed for the uninstallation of service packs. You won’t be able to uninstall any currently installed service packs after running this command:

DISM.exe /online /Cleanup-Image /SPSuperseded

The following command will remove all old versions of every component. You won’t be able to uninstall any currently installed service packs or updates after this completes:

DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

Remove Features on Demand
Modern versions of Windows allow you to enable or disable Windows features on demand. You’ll find a list of these features in the Windows Features window you can access from the Control Panel. Even features you don’t have installed — that is, the features you see unchecked in this window — are stored on your hard drive in your WinSXS folder. If you choose to install them, they’ll be made available from your WinSXS folder. This means you won’t have to download anything or provide Windows installation media to install these features. However, these features take up space. While this shouldn’t matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft. To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you:

DISM.exe /Online /English /Get-Features /Format:Table

You’ll see a table of feature names and their states. To remove a feature from your system, you’d use the following command, replacing NAME with the name of the feature you want to remove. You can get the feature name you need from the table above.

DISM.exe /Online /Disable-Feature /featurename:NAME /Remove

If you run the /Get-Features command again, you’ll now see that the feature has a status of “Disabled with Payload Removed” instead of just “Disabled.” That’s how you know it’s not taking up space on your computer’s hard drive. If you’re trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.
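One more useful check before running any of the cleanup commands: on Windows 8.1 and later (not Windows 7), DISM can analyze the component store and tell you how much space is actually reclaimable. From an elevated Command Prompt:

DISM.exe /Online /Cleanup-Image /AnalyzeComponentStore

This reports the actual on-disk size of the component store (Explorer tends to overstate it, since much of WinSXS consists of hard links to files counted elsewhere) and states whether a component store cleanup is recommended.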

    Read the article

< Previous Page | 420 421 422 423 424 425 426 427 428 429 430 431  | Next Page >