Search Results

Search found 21501 results on 861 pages for 'slow connection'.

  • Rebinding and singleton-behaviour [NInject]

    - by Maximilian Csuk
    Hi! I have set up a NInject (using version 1.5) binding like this:

        Bind<ISessionFactory>().ToMethod<ISessionFactory>(ctx => {
            try {
                // create session factory; might fail because of database issues
                // like a wrong connection string
            } catch (Exception e) {
                throw new DatabaseException(e);
            }
        }).Using<SingletonBehavior>();

    As you can see, this binding uses a singleton behavior but can also throw an exception when something is not configured correctly, like a wrong connection string to the database. Now, when the creation of a session factory fails at first (throwing a DatabaseException), NInject doesn't try to create the object again but always returns null. I would need NInject to check for null first and recreate the instance when it is null, but of course not when an instance has already been constructed successfully (keeping it a singleton). Like this:

        var a = Kernel.Get<ISessionFactory>(); // might fail, a = null
        // ... change some database settings
        var b = Kernel.Get<ISessionFactory>(); // might not fail anymore, b = ISessionFactory object

    Would I need to write a custom behavior, or am I missing something else? Thanks for your answers!
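    A hedged sketch of one workaround, in plain C# rather than NInject's behavior API: bind ToMethod to a provider that caches only a successful creation, so a failed attempt is retried on the next resolve. All names below other than ISessionFactory are assumptions.

        // Hypothetical retry-on-failure provider: a failed creation leaves the
        // cached slot null, so the next call tries again; a success is cached
        // for good, preserving singleton semantics.
        public class RetryingSessionFactoryProvider
        {
            private ISessionFactory _instance;
            private readonly object _gate = new object();

            public ISessionFactory Get()
            {
                lock (_gate)
                {
                    if (_instance == null)
                    {
                        _instance = CreateSessionFactory(); // throws DatabaseException on failure
                    }
                    return _instance;
                }
            }

            private ISessionFactory CreateSessionFactory()
            {
                // build the real session factory here
                throw new NotImplementedException();
            }
        }

    Binding without the singleton behavior, e.g. Bind<ISessionFactory>().ToMethod(ctx => provider.Get()), would then delegate the caching (and the retry-after-null case) to the provider.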

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: we're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong that we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that. Is there anything we can do to improve this?

    We can CompiledQuery.Compile this - that doesn't do any work until first use, so it helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also, I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me that I could kick off something in the background from app_startup to execute all queries and get them compiled - is that safe?)

    However, even if we solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on. I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search results use fewer tables, so it's only a 3-4 second compile rather than 12-15, but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas?

    Profiling points to ELinqQueryState.GetExecutionPlan as the place to start, and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here, but it doesn't look like it would help with this; my large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads, rather than all 32 tables at once, likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book, but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement! Thanks for any suggestions! We're running against SQL Server 2008, in case that matters, on XP / 7 / Server 2008 R2 using RTM VS2010.
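    For reference, a pre-compiled parameterized query looks roughly like the sketch below (the context, entity and Include paths are placeholders, not the poster's model). Note the first invocation still pays the compile cost; only later calls reuse the plan.

        using System;
        using System.Data.Objects;
        using System.Linq;

        static class Queries
        {
            // Compiled once per AppDomain on first use, then reused.
            public static readonly Func<MyEntities, int, IQueryable<Customer>> CustomerWithChildren =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    from c in ctx.Customers.Include("Orders").Include("Orders.Lines")
                    where c.Id == id
                    select c);
        }

        // usage: var customer = Queries.CustomerWithChildren(context, 42).FirstOrDefault();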

  • How to avoid Memory "Hard Fault/sec"

    - by Flavio Oliveira
    I have a problem on my Windows Server 2008 x64 machine and I cannot understand how to solve it. Looking at Resource Monitor I see about 100 to 200 hard faults/sec, and generally the machine is slow. From what I've read, a hard fault is caused by a memory page that is no longer available in physical memory and has to be fetched from disk, causing I/O operations - and that is a problem. The current hardware is an Intel Core 2 Duo E8400 (3.0 GHz) with 6 GB RAM on Windows Web Server 2008 64-bit. The machine currently has about 2 GB of RAM in use, leaving about 4 GB available. Why does the machine require that high a level of disk operations? What can I do to increase performance? Am I experiencing memory issues? What should be my starting point?

  • Shaping outbound Traffic to Control Download Speeds with Linux

    - by Kyle Brandt
    I have a situation where a server makes lots of requests to big webservers all at the same time. Currently, I have no control over the number of requests or the rate of the requests from the application that does this. The responses from these webservers are more than the internet line can handle (basically, we are launching a DoS on ourselves). I am going to push to get this fixed at the application level, but for the time being, is there any way I can use traffic shaping on the Linux server to control this? I know I can only shape outbound traffic, but maybe there is a way I can slow the TCP responses so the other side will detect congestion, and this will help my situation? If there is anything like this with tc, what might the configuration look like? The idea is that traffic control might help me control which packets get dropped before they reach my router.
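    A hedged sketch of the usual answer here: inbound traffic cannot be queued, but tc's ingress policer can drop excess packets early, which makes the remote TCP senders back off on their own. The interface name and rate below are assumptions; the policed rate should sit somewhat below the real line speed.

        # Sketch: police inbound traffic on the internet-facing interface (assumed
        # eth0) to ~6mbit; dropped packets trigger the senders' congestion control.
        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
            match u32 0 0 police rate 6mbit burst 50k drop flowid :1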

  • Simple Workstation Imaging Solution?

    - by Will
    Hey guys, I need a fairly cheap imaging solution for Windows XP corporate desktops. Ideally, I'd be able to set up a desktop exactly as we want it, create an image, deploy that image to a server, then boot a new desktop from a CD / USB drive / the network and quickly set up the workstation. Ideally, each computer would also get a unique workstation name. Any ideas? Right now I'm using a custom-built Linux dd solution, but it's slow, isn't network-based, can't image multiple computers at the same time (there's only one copy, on a USB drive), and can't uniquely name the computers. Thanks, Will

  • AMQP gem specifying a dead letter exchange

    - by JP.
    I've specified a queue on the RabbitMQ server called MyQueue. It is durable and has x-dead-letter-exchange set to MyQueue.DLX. (I also have an exchange called MyExchange bound to that queue, and another exchange called MyQueue.DLX, but I don't believe this is important to the question.) If I use Ruby's amqp gem to subscribe to those messages, I would do it like this:

        # Doing this before and in a new thread has to do with how my code is
        # structured; shown here in case it has a bearing on the question
        Thread.new do
          AMQP.start('amqp://guest:[email protected]:5672')
        end

        EventMachine.next_tick do
          channel = AMQP::Channel.new(AMQP.connection)
          queue = channel.queue("MyQueue", :durable => true, :'x-dead-letter-exchange' => "MyQueue.DLX")
          queue.subscribe(:ack => true) do |metadata, payload|
            p metadata
            p payload
          end
        end

    If I execute this code with the queues and exchanges already created and bound (as they need to be in my set-up), then RabbitMQ throws the following error in its logs:

        =ERROR REPORT==== 19-Aug-2013::14:25:53 ===
        connection <0.19654.2>, channel 2 - soft error:
        {amqp_error,precondition_failed,
            "inequivalent arg 'x-dead-letter-exchange' for queue 'MyQueue' in vhost '/':
             received none but current is the value 'MyQueue.DLX' of type 'longstr'",
            'queue.declare'}

    This seems to be saying that I haven't specified the same dead letter exchange as the pre-existing queue, but I believe I have, with the queue = ... line. Any ideas?
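    For what it's worth, a hedged sketch of the declaration the error hints at: in the amqp gem, server-side queue arguments such as x-dead-letter-exchange generally travel in a nested :arguments hash rather than as top-level options, so the queue.declare above sends no argument at all (the "received none" in the log).

        # Sketch: pass x-dead-letter-exchange inside :arguments so the declared
        # queue matches the existing queue's definition.
        queue = channel.queue("MyQueue",
                              :durable   => true,
                              :arguments => { "x-dead-letter-exchange" => "MyQueue.DLX" })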

  • Disk Response Time in Windows 7 Resource Monitor?

    - by Keith Nicholas
    In Resource Monitor I'm looking at the disk response time. There are a lot of processes where the response time is consistently thousands of milliseconds; I'm pretty sure this is the source of my computer slowing down. I'm not sure what normal response times are, though. I'm running Windows 7 Ultimate 64-bit. This is a new computer - an i5 with a terabyte drive, 4 GB of RAM, etc. - and the disk is still pretty much empty, so it should all be pretty snappy. If it really is going slow, how do I track down what's causing it? I've turned off things like real-time virus protection as an experiment to see if there is something weird there, but it makes no real difference (other than no longer contributing to the problem by accessing the disk).

  • Writing more efficient xquery code (avoiding redundant iteration)

    - by Coquelicot
    Here's a simplified version of a problem I'm working on: I have a bunch of XML data that encodes information about people. Each person is uniquely identified by an 'id' attribute, but they may go by many names. For example, in one document I might find:

        <person id="1">Paul Mcartney</person>
        <person id="2">Ringo Starr</person>

    And in another I might find:

        <person id="1">Sir Paul McCartney</person>
        <person id="2">Richard Starkey</person>

    I want to use XQuery to produce a new document that lists every name associated with a given id, i.e.:

        <person id="1">
            <name>Paul McCartney</name>
            <name>Sir Paul McCartney</name>
            <name>James Paul McCartney</name>
        </person>
        <person id="2">
            ...
        </person>

    The way I'm doing this now in XQuery is something like this (pseudocode-esque):

        let $ids := distinct-terms( [all the id attributes on people] )
        for $id in $ids
        return
            <person id="{$id}"> {
                for $unique-name in distinct-values (
                    for $name in ( [all names] )
                    where $name/@id = $id
                    return $name
                )
                return <name>{$unique-name}</name>
            } </person>

    The problem is that this is really slow. I imagine the bottleneck is the innermost loop, which executes once for every id (of which there are about 1200). I'm dealing with a fair bit of data (300 MB, spread over about 800 XML files), so even a single execution of the query in the inner loop takes about 12 seconds, which means that repeating it 1200 times will take about 4 hours (and that might be optimistic - the process has been running for 3 hours so far). Not only is it slow, it's using a whole lot of virtual memory. I'm using Saxon, and I had to set Java's maximum heap size to 10 GB (!) to avoid getting out-of-memory errors, and it's currently using 6 GB of physical memory.

    So here's how I'd really like to do this (in Pythonic pseudocode):

        persons = {}
        for id in ids:
            persons[id] = set()
        for person in all_the_people_in_my_xml_document:
            persons[person.id].add(person.name)

    There, I just did it in linear time, with only one sweep of the XML document. Now, is there some way to do something similar in XQuery? Surely if I can imagine it, a reasonable programming language should be able to do it (he said quixotically). The problem, I suppose, is that unlike Python, XQuery doesn't (as far as I know) have anything like an associative array. Is there some clever way around this? Failing that, is there something better than XQuery that I might use to accomplish my goal? Because really, the computational resources I'm throwing at this relatively simple problem are kind of ridiculous.
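    A hedged sketch of the single-pass equivalent in XQuery 3.0, whose group by clause plays the role of the associative array (recent Saxon releases support XQuery 3.0; the collection URI is a placeholder):

        (: Sketch: one sweep over all documents, grouping person elements by @id :)
        for $p in collection("people-docs")//person
        group by $id := $p/@id/string()
        return
            <person id="{$id}"> {
                for $n in distinct-values($p)
                return <name>{$n}</name>
            } </person>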

  • -[NSCFData writeStreamHandleEvent:]: unrecognized selector sent to instance in a stream callback

    - by user295491
    Hi everyone, I am working with streams and sockets in iPhone SDK 3.1.3. The issue is that when the program accepts a callback and I want to handle this write-stream callback, the following error is triggered:

        Terminating app due to uncaught exception 'NSInvalidArgumentException', reason:
        '-[NSCFData writeStreamHandleEvent:]: unrecognized selector sent to instance 0x17bc70'

    But I don't know how to solve it, because everything seems fine. Even when I run the debugger there is no error; the program works. Any hint here will help! The code of the callback is:

        void myWriteStreamCallBack(CFWriteStreamRef stream, CFStreamEventType eventType, void *info) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            Connection *handlerEv = [[(Connection *)info retain] autorelease];
            [handlerEv writeStreamHandleEvent:eventType];
            [pool release];
        }

    The code of writeStreamHandleEvent::

        - (void)writeStreamHandleEvent:(CFStreamEventType)eventType {
            switch (eventType) {
                case kCFStreamEventOpenCompleted:
                    writeStreamOpen = YES;
                    break;
                case kCFStreamEventCanAcceptBytes:
                    NSLog(@"Writing in the stream");
                    [self writeOutgoingBufferToStream];
                    break;
                case kCFStreamEventErrorOccurred:
                    error = CFWriteStreamGetError(writeStream);
                    fprintf(stderr, "CFReadStreamGetError returned (%ld, %ld)\n", error.domain, error.error);
                    CFWriteStreamUnscheduleFromRunLoop(writeStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
                    CFWriteStreamClose(writeStream);
                    CFRelease(writeStream);
                    break;
                case kCFStreamEventEndEncountered:
                    CFWriteStreamUnscheduleFromRunLoop(writeStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
                    CFWriteStreamClose(writeStream);
                    CFRelease(writeStream);
                    break;
            }
        }

    The code of the stream configuration:

        CFSocketContext ctx = {0, self, nil, nil, nil};
        CFWriteStreamSetClient(writeStream, registeredEvents,
                               (CFWriteStreamClientCallBack)&myWriteStreamCallBack,
                               (CFStreamClientContext *)(&ctx));
        CFWriteStreamScheduleWithRunLoop(writeStream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

    You can see that there is nothing strange! Well, at least I don't see it. Thank you in advance.
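    One hedged observation: a message intended for Connection arriving at an NSCFData usually means the info pointer no longer points at the Connection object by the time the callback fires. A possibility worth trying (a sketch, not a confirmed fix) is to use the struct type CFWriteStreamSetClient actually expects, CFStreamClientContext, with retain/release callbacks so the stream keeps the info object alive:

        // Sketch: let the stream retain `self` for the callback's lifetime
        // (assumes Connection is an NSObject subclass, so CFRetain/CFRelease
        // work via toll-free bridging).
        CFStreamClientContext ctx = {0, self,
                                     (void *(*)(void *))CFRetain,
                                     (void (*)(void *))CFRelease,
                                     NULL};
        CFWriteStreamSetClient(writeStream, registeredEvents,
                               (CFWriteStreamClientCallBack)&myWriteStreamCallBack,
                               &ctx);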

  • Handling dependencies with IoC that change within a single function call

    - by Jess
    We are trying to figure out how to set up dependency injection for situations where service classes can have different dependencies based on how they are used. In our specific case, we have a web app where 95% of the time the connection string is the same for the entire request, but sometimes it can change. For example, we might have two classes with the following dependencies (simplified version - the service actually has four dependencies):

        public LoginService(IUserRepository userRep) { }
        public UserRepository(IContext dbContext) { }

    In our IoC container, most of our dependencies are auto-wired, except the context, for which I have something like this (not actual code, it's from memory ... this is StructureMap):

        x.ForRequestedType().Use()
            .WithCtorArg("connectionString").EqualTo(Session["ConnString"]);

    For 95% of our web application, this works perfectly. However, we have some admin-type functions that must operate across thousands of databases (one per client). Basically, we'd want to do this:

        public CreateUserList(IList<string> connStrings)
        {
            foreach (connString in connStrings)
            {
                // first create dependency graph using new connection string
                ????
                // then call service method on new database
                _loginService.GetReportDataForAllUsers();
            }
        }

    My question is: how do we create that new dependency graph each time through the loop, while maintaining something that can easily be tested?
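    A hedged sketch of one direction, assuming the StructureMap version in use exposes the explicit-argument API (container.With(...)): override the context per iteration and resolve a fresh service from the container, so the rest of the graph still auto-wires. SqlContext is an assumed IContext implementation, not from the question.

        // Sketch: build a per-iteration graph by overriding IContext for this
        // single resolution; the container's normal registrations stay untouched
        // (and remain swappable with fakes in tests).
        public void CreateUserList(IList<string> connStrings, IContainer container)
        {
            foreach (var connString in connStrings)
            {
                var loginService = container
                    .With<IContext>(new SqlContext(connString))
                    .GetInstance<LoginService>();

                loginService.GetReportDataForAllUsers();
            }
        }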

  • renaming hard drives (sdc to sdb) on the fly

    - by w00t
    ata2: link is slow to respond, please be patient (ready=0)
        kernel: [2761026.198796] ata2: soft resetting link
        kernel: [2761031.226669] ata2.00: disabled
        kernel: [2761031.226720] ata2: EH complete
        kernel: [2761031.226753] sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK,SUGGEST_OK

    After receiving the error above, I couldn't access /dev/sdb anymore. Not wanting to restart the server, I rescanned for the device using

        echo "- - -" > /sys/class/scsi_host/host1/scan

    and it re-added the drive as /dev/sdc. From what I have found, I need to use

        echo "scsi add-single-device 0 0 3 0" > /proc/scsi/scsi

    with "3" being the SCSI ID that corresponds to sdb. Everything is fine up to the point where I execute the command and get

        -bash: echo: write error: Invalid argument

    All the solutions point to using this method, but I am unable to. Is there any other method available? Debian 5.0.8 - 2.6.26-1-686

  • fill dropdown list by querystring

    - by KareemSaad
    I had a drop-down list and I want to fill it with data matching a specific condition. I used this code, but it isn't working well:

        protected void Page_Load(object sender, EventArgs e)
        {
            Page.Title = "ThumbnailViewPage";
            if (Request.QueryString["Category_Id"] != null)
            {
                using (SqlConnection Con = Connection.GetConnection())
                {
                    SqlCommand Com = new SqlCommand("GetProducFamilyTP2", Con);
                    Com.CommandType = CommandType.StoredProcedure;
                    Com.Parameters.Add(Parameter.NewInt("@Category_Id", Request.QueryString["Category_Id"]));
                    SqlDataReader DR = Com.ExecuteReader();
                    if (DR.Read())
                    {
                        DDlProductFamily.DataTextField = DR["Name"].ToString();
                        DDlProductFamily.DataValueField = DR["ProductCategory_Id"].ToString();
                    }
                }
            }
            else if (Request.QueryString["ProductCategory_Id"] != null)
            {
                using (SqlConnection Con = Connection.GetConnection())
                {
                    SqlCommand Com = new SqlCommand("GetProducFamilyTP3", Con);
                    Com.CommandType = CommandType.StoredProcedure;
                    Com.Parameters.Add(Parameter.NewInt("@ProductCategory_Id", Request.QueryString["ProductCategory_Id"]));
                    SqlDataReader DR = Com.ExecuteReader();
                    if (DR.Read())
                    {
                        DDlProductFamily.DataTextField = DR["Name"].ToString();
                        DDlProductFamily.DataValueField = DR["ProductCategory_Id"].ToString();
                    }
                }
            }
        }

        <asp:UpdatePanel ID="UpdatePanel3" runat="server">
            <ContentTemplate>
                <asp:DropDownList ID="DDlProductFamily" runat="server"
                    ondatabound="DDlProductFamily_DataBound"
                    onload="DDlProductFamily_Load"
                    onselectedindexchanged="DDlProductFamily_SelectedIndexChanged">
                </asp:DropDownList>
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="DDlProductFamily" EventName="SelectedIndexChanged" />
            </Triggers>
        </asp:UpdatePanel>
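    A hedged observation on why nothing appears: DataTextField and DataValueField expect column names, not cell values, and the reader is never actually bound to the list. A sketch of the binding step (names taken from the question):

        // Sketch: bind the reader once and tell the list which columns to use;
        // DataTextField/DataValueField take column names, not row values.
        SqlDataReader DR = Com.ExecuteReader();
        DDlProductFamily.DataSource = DR;
        DDlProductFamily.DataTextField = "Name";
        DDlProductFamily.DataValueField = "ProductCategory_Id";
        DDlProductFamily.DataBind();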

  • nginx+php-fpm help optimize configs

    - by Dmitro
    I have 3 servers. The first server (CPU model name: 06/17, 2.66 GHz, 4 cores, 8 GB RAM) runs nginx as a load balancer with this config:

        upstream lb_mydomain {
            server mydomain.ru:81 weight=2;
            server 66.0.0.18 weight=6;
        }

        server {
            listen 80;
            server_name ~(?!mydomain.ru)(.*);
            client_max_body_size 20m;
            location / {
                proxy_pass http://lb_mydomain;
                proxy_redirect off;
                proxy_set_header Connection close;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass_header Set-Cookie;
                proxy_pass_header P3P;
                proxy_pass_header Content-Type;
                proxy_pass_header Content-Disposition;
                proxy_pass_header Content-Length;
            }
        }

    And from nginx.conf:

        user www-data;
        worker_processes 5;
        # worker_priority -1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 5024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            default_type application/octet-stream;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            # PHP-FPM (backend)
            upstream php-fpm {
                server 127.0.0.1:9000;
            }

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    And the php-fpm config:

        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        pm = dynamic
        pm.max_children = 80
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        ;pm.max_requests = 500
        pm.status_path = /status
        ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30s
        request_slowlog_timeout = 10s
        slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
        ;env[HOSTNAME] = $HOSTNAME
        ;env[PATH] = /usr/local/bin:/usr/bin:/bin
        ;env[TMP] = /tmp
        ;env[TMPDIR] = /tmp
        ;env[TEMP] = /tmp
        ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
        ;php_flag[display_errors] = off
        ;php_admin_value[error_log] = /var/log/fpm-php.www.log
        ;php_admin_flag[log_errors] = on
        ;php_admin_value[memory_limit] = 32M

    In top I see 20 php-fpm processes, each using 1% - 15% CPU, and the load average is high:

        top - 15:36:22 up 34 days, 20:54, 1 user, load average: 5.98, 7.75, 8.78
        Tasks: 218 total, 1 running, 217 sleeping, 0 stopped, 0 zombie
        Cpu(s): 34.1%us, 3.2%sy, 0.0%ni, 37.0%id, 24.8%wa, 0.0%hi, 0.9%si, 0.0%st
        Mem: 8183228k total, 7538584k used, 644644k free, 351136k buffers
        Swap: 9936892k total, 14636k used, 9922256k free, 990540k cached

    The second server (CPU: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz, 8 cores, 8 GB RAM) has an nginx.conf identical to the one above. Its php-fpm config is:

        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        pm = dynamic
        pm.max_children = 50
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        ;request_terminate_timeout = 0
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
        ;env[HOSTNAME] = $HOSTNAME
        ;env[PATH] = /usr/local/bin:/usr/bin:/bin
        ;env[TMP] = /tmp
        ;env[TMPDIR] = /tmp
        ;env[TEMP] = /tmp
        ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
        ;php_flag[display_errors] = off
        ;php_admin_value[error_log] = /var/log/fpm-php.www.log
        ;php_admin_flag[log_errors] = on
        ;php_admin_value[memory_limit] = 32M

    In top I see 50 php-fpm processes, each using 10% - 25% CPU, so the load average is very high:

        top - 15:53:05 up 33 days, 1:15, 1 user, load average: 41.35, 40.28, 39.61
        Tasks: 239 total, 40 running, 199 sleeping, 0 stopped, 0 zombie
        Cpu(s): 96.5%us, 3.1%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st
        Mem: 8185560k total, 7804224k used, 381336k free, 161648k buffers
        Swap: 19802108k total, 16k used, 19802092k free, 5068112k cached

    The third server hosts the PostgreSQL database. I also tried ab -n 50 -c 5 http://www.mydomain.ru/ and got:

        Complete requests: 50
        Failed requests: 48
           (Connect: 0, Receive: 0, Length: 48, Exceptions: 0)
        Write errors: 0
        Total transferred: 9271367 bytes
        HTML transferred: 9247767 bytes
        Requests per second: 1.02 [#/sec] (mean)
        Time per request: 4882.427 [ms] (mean)
        Time per request: 976.486 [ms] (mean, across all concurrent requests)
        Transfer rate: 185.44 [Kbytes/sec] received

    Please advise how I can lower the load average.
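    A hedged aside on the second server's numbers: with all 8 cores busy in user space (96.5%us, no iowait) and 40 runnable php-fpm workers, the load average largely reflects CPU oversubscription. One sketch of a more conservative pool sizing to experiment with (the values are assumptions, not a prescription):

        ; Sketch: cap concurrent workers near the core count for a CPU-bound
        ; workload; excess requests queue in nginx instead of thrashing the CPU.
        pm = dynamic
        pm.max_children = 16
        pm.start_servers = 8
        pm.min_spare_servers = 4
        pm.max_spare_servers = 8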

  • Problems with show hide jquery

    - by Michael
    I am trying to get show/hide to work on multiple objects, but I am unable to get it to work. Any help would be nice and much appreciated... I am lost on how to do this. If I only do one show/hide it works fine, but with more than one it does not work properly.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Show Hide Sample</title>
        <script src="js/jquery.js" type="text/javascript"></script>
        <script type="text/javascript">
        $(document).ready(function(){
            $('#content1').hide();
            $('a').click(function(){
                $('#content1').show('slow');
            });
            $('a#close').click(function(){
                $('#content1').hide('slow');
            })
        });
        </script>
        <style>
        body{font-size:12px; font-family:"Trebuchet MS"; background: #CCF}
        #content1{
            border:1px solid #DDDDDD;
            padding:10px;
            margin-top:5px;
            width:300px;
        }
        </style>
        </head>
        <body>
        <a href="#" id="click">Test 1</a>
        <div class="box">
            <div id="content1">
                <h1 align="center">Hide 1</h1>
                <P> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nullam pulvinar, enim ac hendrerit mattis, lorem mauris vestibulum tellus, nec porttitor diam nunc tempor dui. Aenean orci. Sed tempor diam eget tortor. Maecenas quis lorem. Nullam semper. Fusce adipiscing tellus non enim volutpat malesuada. Cras urna. Vivamus massa metus, tempus et, fermentum et, aliquet accumsan, lectus. Maecenas iaculis elit eget ipsum cursus lacinia. Mauris pulvinar.</p>
                <p><a href="#" id="close">Close</a></p>
            </div>
        </div>
        <a href="#" id="click">Test 2</a>
        <div class="box">
            <div id="content1">
                <h1 align="center">Hide 2</h1>
                <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nullam pulvinar, enim ac hendrerit mattis, lorem mauris vestibulum tellus, nec porttitor diam nunc tempor dui. Aenean orci. Sed tempor diam eget tortor. Maecenas quis lorem. Nullam semper. Fusce adipiscing tellus non enim volutpat malesuada. Cras urna. Vivamus massa metus, tempus et, fermentum et, aliquet accumsan, lectus. Maecenas iaculis elit eget ipsum cursus lacinia. Mauris pulvinar.</p>
                <p><a href="#" id="close">Close</a></p>
            </div>
        </div>
        </body>
        </html>
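    A hedged sketch of the usual fix: ids such as content1 and close must be unique on a page, so repeated blocks need classes plus relative traversal instead. The class names below (opener, content, close) are assumptions; the markup would change to match.

        // Sketch: markup uses class="content" and class="close" instead of
        // repeated ids, and each opener link is immediately followed by its
        // .box, so traversal stays relative to the clicked element.
        $(document).ready(function () {
            $('.content').hide();
            $('a.opener').click(function () {
                $(this).next('.box').find('.content').show('slow');
                return false;
            });
            $('a.close').click(function () {
                $(this).closest('.content').hide('slow');
                return false;
            });
        });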

  • How can I run BitchX when I don't have ssh or shell access?

    - by Christopher
    I just installed an IRC bot, B****X (don't ask, I don't know - the real name is not censored). I did all of the configuration and chmod'ed the .pl files to 755, but running it won't work. My host does not allow SSH/shell access (which is how the documentation says to run the script), but just visiting the URL usually works in that situation. However, I get a 500 (Internal Server Error) error. I have logged errors:

        Possible unintended interpolation of @moz in string at ./bitch.conf line 78 (#1)
        (W ambiguous) You said something like `@foo' in a double-quoted string
        but there was no array @foo in scope at the time. If you wanted a
        literal @foo, then write it as \@foo; otherwise find out what happened
        to the array you apparently lost track of.

        [Fri Mar 19 16:31:43 2010] bitch.pl: Possible unintended interpolation of @moz in string at ./bitch.conf line 78.

        Uncaught exception from user code:
        [Fri Mar 19 16:31:46 2010] bitch.pl: [FAILED] (connect error: Connection refused)
            at /usr/local/lib/perl5/5.8.8/CGI/Carp.pm line 354
            CGI::Carp::realdie('[Fri Mar 19 16:31:46 2010] bitch.pl: [FAILED] (conne...')
                called at /usr/local/lib/perl5/5.8.8/CGI/Carp.pm line 446
            CGI::Carp::die('[FAILED] (connect error: Connection refused)\x{a}')
                called at bitch.pl line 555

    Thanks in advance

  • How to determine the used size of a device's associated buffer

    - by dubbaluga
    Hi, when mounting a device without the "sync" option, e.g. by invoking the following:

        mount -o async /dev/sdc1 /mnt

    a buffer is associated with the device to optimize (speed up) read/write operations. Is there a way to determine the size of this buffer? Another question that comes to mind is whether it's possible to find out how much of it is currently in use. This can be interesting for estimating the time it would take to "sync" or "umount" slow devices, such as flash-based media. Thanks in advance for your answers, Rainer
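    For reference, a hedged sketch of one way to watch this on Linux: buffered writes sit in the page cache, and the Dirty and Writeback counters in /proc/meminfo show how much data is still waiting to reach the disk (system-wide, not per device):

        # How much data is waiting to be written back?
        grep -E '^(Dirty|Writeback):' /proc/meminfo

        # Watch the counters drain while a sync is in progress
        watch -n1 "grep -E '^(Dirty|Writeback):' /proc/meminfo"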

  • create new db in mysql with php syntax

    - by Jacksta
    I am trying to create a new db called testDB2; below is my code. On running the script, all I am getting is an error on line 7:

        Fatal error: Call to undefined function mysqlquery() in
        /home/admin/domains/domain.com.au/public_html/db_createdb.php on line 7

    This is my code:

        <?
        $sql = "CREATE database testDB2";
        $connection = mysql_connect("localhost", "admin_user", "pass") or die(mysql_error());
        $result = mysqlquery($sql, $connection) or die(mysql_error());
        if ($result) {
            $msg = "<p>Databse has been created!</p>";
        }
        ?>
        <HTML>
        <head>
            <title>Create MySQL database</title>
        </head>
        <body>
            <? echo "$msg"; ?>
        </body>
        </HTML>
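    For what it's worth, the undefined function in that fatal error looks like a simple misspelling: the old MySQL extension's query function is mysql_query, with an underscore. A hedged one-line fix for line 7:

        // corrected: mysql_query is the actual function name
        $result = mysql_query($sql, $connection) or die(mysql_error());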

  • How does one convert from a Java resultset to ColdFusion query in Railo?

    - by Shawn Grigson
    The following works fine in CFMX 7 and CF8, and I'd assume CF9 as well:

        <!--- 'conn' is a JDBC connection --->
        <cfset stat = conn.createStatement() />
        <cfset rs = stat.executeQuery(trim(arguments.sql)) />
        <!--- convert this Java resultset to a CF query recordset --->
        <cfset queryTable = CreateObject("java", "coldfusion.sql.QueryTable")>
        <cfset queryTable.init(rs) >
        <cfset query = queryTable.FirstTable() />

    This creates a statement using a JDBC driver, executes a query against it (putting the results into a Java ResultSet), then instantiates coldfusion.sql.QueryTable, passes it the Java ResultSet, and calls queryTable.FirstTable(), which returns an actual ColdFusion record set (usable with cfloop and the like). The problem comes with a difference in Railo's implementation. Running this code in Railo returns the following error:

        No matching Constructor for coldfusion.sql.QueryTable(org.sqlite.RS) found.

    I've dumped the Railo Java object and don't see init() among the methods. Am I missing something simple? I'd love to get this working in Railo as well. Please note: I am doing a DSN-less connection to a SQLite db. I understand how to set up a CF datasource. My only hiccup at this point is doing the translation from a Java ResultSet to a Railo query.

  • How to do simple multitasked loop processing over filenames with PowerShell?

    - by Ville Koskinen
    I'm batch-transcoding some 50 GB of video files on a USB hard disk which is connected to a WLAN router. The drive is mapped as a network drive on my Windows 7 laptop. The speed handicap of the WLAN causes some parts of the processing to become unnecessarily slow, so I would like to do the following with PowerShell:

        1. List the names of the files on the network drive to be transcoded
        2. Copy the first file to a temporary folder on my laptop
        3. Simultaneously:
           - transcode the file in the folder
           - begin copying the next file from the network drive to the temporary folder
        4. After the transcode and the copy have both ended:
           - delete the file which has been transcoded from the temporary folder
           - begin transcoding the next file in the temporary folder
        5. Loop until all files have been processed

    How would I be able to do this with PowerShell? The multitasking part is an obstacle for my skill/persistence combination.
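    A hedged sketch of the overlap using background jobs; the transcoder invocation and all paths are placeholders, not taken from the question:

        # Sketch: copy file N+1 in a background job while file N transcodes locally.
        $source = '\\router\usb\videos'      # assumed network share
        $temp   = 'C:\transcode-temp'        # assumed local staging folder
        $files  = Get-ChildItem $source -Filter *.avi | Select-Object -ExpandProperty Name

        # Prime the pipeline with the first file
        Copy-Item (Join-Path $source $files[0]) $temp

        for ($i = 0; $i -lt $files.Count; $i++) {
            # Start copying the next file in the background, if there is one
            $copyJob = $null
            if ($i + 1 -lt $files.Count) {
                $copyJob = Start-Job -ScriptBlock {
                    param($src, $dst) Copy-Item $src $dst
                } -ArgumentList (Join-Path $source $files[$i + 1]), $temp
            }

            # Transcode the current file in the foreground (placeholder command)
            & 'C:\tools\transcode.exe' (Join-Path $temp $files[$i])

            # Wait for the background copy, then clean up the finished file
            if ($copyJob) { Wait-Job $copyJob | Out-Null; Remove-Job $copyJob }
            Remove-Item (Join-Path $temp $files[$i])
        }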

  • how to use a "tab space" while writing to a text file

    - by Manu
    SimpleDateFormat formatter = new SimpleDateFormat("ddMMyyyy_HHmmSS");
        String strCurrDate = formatter.format(new java.util.Date());
        String strfileNm = "Cust_Advice_" + strCurrDate + ".txt";
        String strFileGenLoc = strFileLocation + "/" + strfileNm;
        String strQuery = "select name, age, data from basetable";
        try {
            stmt = conn.createStatement();
            System.out.println("Query is -> " + strQuery);
            rs = stmt.executeQuery(strQuery);
            File f = new File(strFileGenLoc);
            OutputStream os = (OutputStream) new FileOutputStream(f);
            String encoding = "UTF8";
            OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
            BufferedWriter bw = new BufferedWriter(osw);
            while (rs.next()) {
                bw.write(rs.getString(1) == null ? "" : rs.getString(1));
                bw.write(" ");
                bw.write(rs.getString(2) == null ? "" : rs.getString(2));
                bw.write(" ");
            }
            bw.flush();
            bw.close();
        } catch (Exception e) {
            System.out.println("Exception occured while getting resultset by the query");
            e.printStackTrace();
        } finally {
            try {
                if (conn != null) {
                    System.out.println("Closing the connection" + conn);
                    conn.close();
                }
            } catch (SQLException e) {
                System.out.println("Exception occured while closing the connection");
                e.printStackTrace();
            }
        }
        return objArrayListValue;

    I need one tab space between each column (while writing to the text file), like:

        manu    25    data1
        manc    35    data3

    In my code I use bw.write(" ") to create a space between columns. How do I write one tab space there instead of a space?
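    For reference: in a Java string literal the tab character is the escape \t, so this is a one-character change to the write calls:

        // write a tab between columns instead of a space
        bw.write(rs.getString(1) == null ? "" : rs.getString(1));
        bw.write("\t");
        bw.write(rs.getString(2) == null ? "" : rs.getString(2));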

  • How can I load a txt file from the internet into my JSF app?

    - by Elena
    Hi all! It's me again) I have another problem. I want to load a file (for example, a txt file) from the web. I tried the following code in my managed bean:

        public void run() {
            try {
                URL url = new URL(this.filename);
                URLConnection connection = url.openConnection();
                bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                if (bufferedReader == null) {
                    return;
                }
                System.out.println("wwwwwwwwwwwwwwwwwwwww");
                String str = bufferedReader.readLine();
                System.out.println("qqqqqqqqqqqqqqqqqqqqqqq = " + str);
                while (bufferedReader.readLine() != null) {
                    System.out.println("---- " + bufferedReader.readLine());
                }
            } catch (MalformedURLException mue) {
                System.out.println("MalformedURLException in run() method");
                mue.printStackTrace();
            } catch (IOException ioe) {
                System.out.println("IOException in run() method");
                ioe.printStackTrace();
            } finally {
                try {
                    bufferedReader.close();
                } catch (IOException ioe) {
                    System.out.println("IOException while closing BufferedReader");
                    ioe.printStackTrace();
                }
            }
        }

        public String doFileUpdate() {
            String str = FacesContext.getCurrentInstance().getExternalContext().getRequestServletPath();
            System.out.println("111111111111111111111 str = " + str);
            str = "http://narod.ru/disk/20957166000/test.txt.html"; // "http://localhost:8080/sfront/files/test.html"
            System.out.println("222222222222222222222 str = " + str);
            FileUpdater fileUpdater = new FileUpdater(str);
            fileUpdater.run();
            return null;
        }

    But the BufferedReader returns the HTML code of the current page, where I am trying to call the managed bean's method. It's very strange - I have googled and nobody seems to have had this problem. Maybe I am doing something wrong, or maybe there is a simpler way to load a file into a web (JSF) app without using the net API. Any ideas? Thanks very much for your help! With best wishes)
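    One hedged aside on the read loop itself: calling readLine() twice per iteration prints only every other line and silently drops the rest. The usual idiom assigns once per pass, as sketched below. Also note that both URLs in doFileUpdate() end in .html, so receiving HTML from them may simply be expected.

        // Sketch: read each line exactly once
        String line;
        while ((line = bufferedReader.readLine()) != null) {
            System.out.println("---- " + line);
        }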

  • Setup SSL (self signed cert) with tomcat

    - by Danny
    I am mostly following this page: http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html

    I used this command to create the keystore:

        keytool -genkey -alias tomcat -keyalg RSA -keystore /etc/tomcat6/keystore

    and answered the prompts. Then I edited my server.xml file and uncommented/edited this line:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/etc/tomcat6/keystore" keystorePass="tomcat" />

    Then I go to the web.xml file for my project and add this:

        <security-constraint>
            <web-resource-collection>
                <web-resource-name>Security</web-resource-name>
                <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <user-data-constraint>
                <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
        </security-constraint>

    When I try to run my webapp I am met with this:

        Unable to connect
        Firefox can't establish a connection to the server at localhost:8443.
        * The site could be temporarily unavailable or too busy. Try again in a few moments.
        * If you are unable to load any pages, check your computer's network connection.

    If I comment out the lines I've added to my web.xml file, the webapp works fine. My log file in /var/lib/tomcat6/logs says nothing. I can't figure out if this is a problem with my keystore file, my server.xml file or my web.xml file... Any assistance is appreciated. I am using Tomcat 6 on Ubuntu.
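    A couple of hedged checks for this kind of "connection refused on 8443" situation: confirm the connector actually opened the port, that the configured password really opens the keystore, and look for connector startup errors outside the app logs (the keystore path is from the question; the log path is an assumption for an Ubuntu tomcat6 package):

        # Is anything listening on 8443? (no output means the connector never started)
        sudo netstat -tlnp | grep 8443

        # Does the keystorePass from server.xml open the keystore?
        keytool -list -keystore /etc/tomcat6/keystore

        # Connector startup errors often land in catalina.out, not the app logs
        grep -i 'ssl\|8443' /var/lib/tomcat6/logs/catalina.out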

  • Need software to save videos from 4tube.com - to watch the videos smoothly

    - by Carl
    Isn't there a program that will capture screen writes at the hardware level? I have tried several Firefox add-ons and several stand-alone programs, and none of them will save videos from this site. I even paid for Replay Media Catcher, and it didn't work, so I got a refund. (The website for the best Firefox video downloader I have, DownloadHelper, said Replay Media Catcher worked with that site.) I have a slow internet connection and cannot watch videos smoothly unless I can cache them. This site (4tube.com) doesn't cache: when you restart, it reloads; when you pause, it stops. So I need to be able to save the videos in order to watch them.

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32 GB RAM, 2 quad-core Xeons) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2 TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or done locally) seems to practically block any other access to anything else on the controller. For example, if I do:

        cp /home/joe/BigFile /home/joe/BigFileCopy

    where BigFile is 20 GB, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior; it only happens with simultaneous read/write to the same array.

  • Freeware Local Proxy for Proxy Chaining with HTTPAUTH

    - by pepoluan
    I am looking for a freeware local proxy to perform proxy chaining with HTTPAUTH. To explain my situation: in my workplace I am forced to keep switching between several internet-connected apps, and every time I have to type in my credentials (or at least click 'OK' to send my previously saved credentials). To make matters more annoying, the proxy login times out every 30 minutes, requiring me to lather-rinse-repeat the whole annoyance. I'd like to just point them all at a locally installed proxy which will perform the required HTTPAUTH against the corporate proxy on its own. I've tried Cntlm, but it always fails to authenticate (and according to this thread, that is because the proxy uses HTTPAUTH, which Cntlm does not support). Any suggestions? ETA: I found Polipo, but it's kinda wonky on Windows. In particular, if I visit a new URL and the DNS server is a bit slow, Polipo will simply drop/refuse the connection. And I have to put my password in plaintext. If there's a better suggestion, I'm all ears.
