Search Results

Search found 20168 results on 807 pages for 'macgreggor at your service'.

Page 575 of 807

  • How do I run strace or ltrace on Tomcat Catalina?

    - by flashnode
    Running ltrace isn't trivial. This RHEL 5.3 system is based on Tomcat Catalina (a servlet container), which uses shell scripts to tie everything together. When I tried to find an executable, here's the rabbit hole I went down.

    /etc/init.d/pki-ca9 calls dtomcat5-pki-ca9:

        # Path to the tomcat launch script (direct don't use wrapper)
        TOMCAT_SCRIPT=/usr/bin/dtomcat5-pki-ca9

    /usr/bin/dtomcat5-pki-ca9 calls a watchdog program:

        /usr/bin/nuxwdog -f $FNAME

    I replaced nuxwdog with a wrapper:

        [root@qantas]# cat /usr/bin/nuxwdog
        #!/bin/bash
        ltrace -e open -o /tmp/ltrace.$(date +%s) /usr/bin/nuxwdog.bak $@

        [root@qantas]# service pki-ca9 start
        Starting pki-ca9:  [ OK ]

        [root@qantas]# cat /tmp/ltrace.1295036985
        +++ exited (status 1) +++

    This is ugly. How do I run strace or ltrace on Tomcat?
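
    For reference, a minimal sketch of attaching strace or ltrace directly to the already-running Tomcat JVM by PID, instead of wrapping the launch scripts; the pgrep pattern here is an assumption and may need adjusting for this Catalina instance:

        # find the JVM that Catalina started (pattern is a guess; adjust as needed)
        TOMCAT_PID=$(pgrep -f 'dtomcat5-pki-ca9|catalina' | head -n 1)

        # attach strace to the running process and all of its threads,
        # writing timestamped syscall traces to a file instead of the terminal
        strace -f -tt -o /tmp/tomcat.strace -p "$TOMCAT_PID"

        # ltrace can attach the same way, limited to open() as in the wrapper above
        ltrace -f -e open -o /tmp/tomcat.ltrace -p "$TOMCAT_PID"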

    Read the article

  • How do I set up Amazon EC2 with my own OS and DB?

    - by SLim
    I have my own OS and DB licenses: Windows Server 2008 R2 Hyper-V and SQL Server 2008 R2, both Enterprise edition. How do I get them configured and running on Amazon EC2, and what else must be combined with EC2 to make this work? Also, how would I install the operating system and DNS? I have never run a server before; I just need something like a VPS to support my development and testing, and Amazon EC2 seems to be the best and cheapest service at only $1 per hour.

    Read the article

  • Getting the general log to work in MySQL 5.6.8

    - by Benjamin
    I can't get the general log to work in this version of MySQL. I added the following lines to /usr/my.cnf:

        general_log = 1
        general_log_file = "/var/log/mysql.log"

    Then restarted the server:

        [root@localhost ~]# service mysql restart
        Shutting down MySQL.. SUCCESS!
        Starting MySQL. SUCCESS!

    The settings seem to be taken into account:

        mysql> SHOW VARIABLES LIKE 'general_log%';
        +------------------+--------------------+
        | Variable_name    | Value              |
        +------------------+--------------------+
        | general_log      | ON                 |
        | general_log_file | /var/log/mysql.log |
        +------------------+--------------------+
        2 rows in set (0.01 sec)

    But the log is never created:

        [root@localhost ~]# mysqladmin flush-logs
        [root@localhost ~]# ls -al /var/log/mysql.log
        ls: cannot access /var/log/mysql.log: No such file or directory

    Any idea why?
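
    One hedged thing to try (assuming MySQL 5.6's dynamic variables, and that file permissions or SELinux are what is blocking /var/log/mysql.log) is to enable the general log at runtime and point it at a file inside the data directory, which the mysql account can certainly write to:

        -- enable the general log dynamically, no restart required
        SET GLOBAL general_log_file = '/var/lib/mysql/general.log';
        SET GLOBAL general_log = 'ON';

        -- confirm the settings, then generate a statement to log and check the file
        SHOW VARIABLES LIKE 'general_log%';
        SELECT 1;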

    Read the article

  • REST-based file server

    - by Chris Wenham
    I need to be able to PUT files and GET them later using nothing but HTTP, so I went searching for something that might match the terms "REST file server" or "HTTP file server" or "REST drop-box", etc. Unfortunately, these terms bring up the wrong kind of results on Google. What I want is the equivalent of an SMB file share over HTTP. Some ideal features:

    - Can PUT a file of any type at http://servername/service/any/path/I/want/document.pdf
    - Anyone with access can GET that file at the URL I PUT it at
    - Supports AV scanning on any new file that has been PUT
    - Supports DELETE of existing resources (files)

    Our shop runs Windows, but I'd be interested to know about Unix software that can do this kind of thing, too. It's to be used in an IT department for private users only. It won't be on a public-facing IP address. Does anything like this exist?
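
    One possible direction (certainly not the only one) is nginx with its ngx_http_dav_module, which handles PUT and DELETE over plain HTTP; a minimal sketch with hypothetical paths, no authentication, no AV scanning wired in, and assuming the module is compiled in:

        server {
            listen 80;
            server_name servername;

            location /service/ {
                root /var/www/files;               # PUTs land under /var/www/files/service/...
                dav_methods PUT DELETE MKCOL COPY MOVE;
                create_full_put_path on;           # allow PUT into directories that don't exist yet
                dav_access user:rw group:rw all:r;
                client_max_body_size 200m;

                limit_except GET PUT DELETE {      # lock everything else down
                    deny all;
                }
            }
        }

    AV scanning would have to be bolted on separately, for example as a periodic scan of the upload root.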

    Read the article

  • What is the best way for an experienced developer to work on a WordPress blog

    - by nanothief
    I'm beginning to work on my first WordPress blog, but I've noticed most tutorials just have you make modifications (such as theme changes or installing plugins) on the production site. This worries me for a few reasons:

    - No backups
    - No version control
    - If you make a mistake, your production site is affected
    - Developing remotely is slower than local development, especially when tweaking CSS files

    I understand why WordPress works like this: it allows people with no development experience to manage their WordPress installation (or the one provided by their service provider), and it allows you to work on the installation without having SSH access to the server. However, as I am comfortable working with tools like git and ssh, and am using a virtual server for the blog, this isn't very important to me. So I was wondering what techniques experienced developers use when working on a WordPress blog. For example:

    - Do you develop locally, then push the changes to the live site? How do you do this?
    - How do you manage database changes and backups?
    - What do you store under version control (if anything)?
    - If a plugin changes the database, do you somehow track its changes in version control, so you can roll them back if you need to?

    Or maybe I'm just overcomplicating everything, if working on the production site isn't as risky as I think it is. I would appreciate any answers either way.
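
    As one hedged sketch of the "develop locally, push to the live site" half of this (the remote name, hook path and database names below are assumptions, not WordPress conventions):

        # local machine: commit theme/plugin work and push it to a bare repo on the server
        git add -A
        git commit -m "Tweak theme CSS"
        git push live master            # 'live' is a hypothetical remote pointing at the server

        # on the server, a post-receive hook checks the pushed work out into the web root:
        #   GIT_WORK_TREE=/var/www/blog git checkout -f master

        # the database is backed up out of band rather than versioned
        mysqldump -u wp_user -p wp_db | gzip > ~/backups/wp_db-$(date +%F).sql.gz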

    Read the article

  • Client can't reach my production webserver. It's their ISP's fault, but now what?

    - by MikeN
    I have a customer in Michigan who can't access my production SaaS webserver, which is hosted on Slicehost. All other companies across the US/Canada/Europe have no problem reaching the site. The problem occurs intermittently, and Slicehost customer service says it's a problem with the client's ISP. I got the IP address of my client, and pinging that IP address from my PROD server fails, but pinging it from my dev box or our separate blog server (also hosted on Slicehost) works. How do I debug a problem like this? I asked the client to reach out to their local ISP and ask about this problem. A traceroute shows that the packets are getting stopped on a Comcast Michigan node, which is the client's ISP. Is there anything more I can do to fix this problem for my client?

    Read the article

  • Using nginx and/or varnish to cache server-generated 301 redirects

    - by rlotun
    I'm implementing a sort of URL-shortener service. What happens is that I have a backend app server that takes in a request, does some computation, and returns a 301 redirect back to an nginx frontend: request ---> nginx ----> app_server. What I want to be able to do is cache this returned 301 for repeats of the same request (a specific URL with a "short code"). Does nginx do this caching automatically? Or should I drop something like Varnish in between nginx and the app_server? I could easily cache this in memcache, but that would require hitting the app_server, which I'm sure can be dispensed with after the first request. Thanks.
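
    nginx does not cache upstream responses unless proxy caching is switched on; a hedged sketch of caching the app server's 301s in nginx itself (zone names, paths and lifetimes are assumptions) might look like:

        proxy_cache_path /var/cache/nginx/redirects levels=1:2 keys_zone=redirects:10m max_size=100m;

        upstream app_server {
            server 127.0.0.1:8000;                 # hypothetical app server address
        }

        server {
            listen 80;

            location / {
                proxy_pass http://app_server;
                proxy_cache redirects;
                proxy_cache_key $scheme$host$request_uri;
                proxy_cache_valid 301 1d;          # keep cached 301s for a day
                add_header X-Cache-Status $upstream_cache_status;
            }
        }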

    Read the article

  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from PowerShell. I kept getting:

        <ResponsePacket xmlns="urn:Microsoft.Search.Response"><Response domain=""><Status>ERROR_NO_RESPONSE</Status><DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage></Response></ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the SearchServer2008 search webservice was incorrect. I was calling

        $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

        $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample PowerShell script:

        # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
        $error.clear()
        # Bad SearchServer2008Express Search URL
        $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
        # Good SearchServer2008Express Search URL
        $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"
        $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential
        $queryXml = "<QueryPacket Revision='1000'>
          <Query>
            <SupportedFormats>
              <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
            </SupportedFormats>
            <Context>
              <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
              <!-- <QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
            </Context>
          </Query>
        </QueryPacket>"
        $statusResponse = $search.Status()
        write-host '$statusResponse:' $statusResponse
        $GetPortalSearchInfo = $search.GetPortalSearchInfo()
        write-host '$GetPortalSearchInfo:' $GetPortalSearchInfo
        $queryResult = $search.Query($queryXml)
        write-host '$queryResult:' $queryResult

    Read the article

  • Xmail shows error at Gmail

    - by Karthik Malla
    Hello, I am using an XMail server at IP 65.75.241.26, hosted at www.softmail.me. I can send and receive email to/from all email service providers, but I am unable to receive replies back from Gmail; the problem occurs only with Gmail. Experts say that Gmail requires properly working MX records, whereas other email providers do not check them as strictly. Are MX records the solution to my problem, or is there some other issue? My mail server's outgoing port is softmail.me:25 and its incoming port is softmail.me:110. Please help me get rid of this issue.
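
    As a hedged first check from outside the network (the mail host name below is a placeholder for whatever the domain's MX should point at):

        # does the domain publish an MX record at all?
        dig +short MX softmail.me

        # does the host that MX points at resolve, and does it answer on port 25?
        dig +short A mail.softmail.me
        telnet mail.softmail.me 25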

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one), but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's handled already with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN - SAN being prohibitively expensive, and NFS showing some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds. The secondary takes ~2.2 seconds. They're VMs inside a local virtual network in ESXi, and ping time between them is ~0.3 ms.
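
    If NFS stays in the picture, one hedged tuning sketch for the secondary node (the export path and cache lifetimes are assumptions to benchmark, and longer attribute caching trades freshness for far fewer GETATTR round trips during those PHP glob() calls):

        # /etc/fstab entry on the secondary node
        nfs-master:/srv/app  /srv/app  nfs  noatime,vers=3,rsize=65536,wsize=65536,actimeo=60,lookupcache=all  0  0

        # mount it and time the module initialisation again
        mount /srv/app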

    Read the article

  • Restarting rsyslog re-sends logs again

    - by Jay Taylor
    I am running Ubuntu 12.04.1 LTS on EC2. I have a bunch of application servers which are configured to forward their logs to a central server via rsyslog. Since putting in Nagios monitoring on the log files on the central server, I've been getting alerts indicating that particular application servers are failing to forward their logs to the centralized server. Logging into the machines and restarting the rsyslog service fixes the problem. However, rsyslog then re-transmits the logs again, resulting in duplicates on the collector. Why is it doing this?
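
    A hedged guess at the mechanism: the forwarding action spools messages while the collector is unreachable and replays the spool after a restart, and plain TCP forwarding has no application-level acknowledgement, so anything in flight when the connection drops can be sent twice. The client-side queue settings involved (legacy rsyslog directives; the values here are assumptions) look roughly like:

        # /etc/rsyslog.d/60-forward.conf on an application server
        $WorkDirectory /var/spool/rsyslog        # where queued messages are spooled
        $ActionQueueType LinkedList              # asynchronous in-memory queue
        $ActionQueueFileName fwdRule1            # enables disk spooling for overflow/shutdown
        $ActionQueueMaxDiskSpace 1g
        $ActionQueueSaveOnShutdown on            # persist unsent messages across restarts
        $ActionResumeRetryCount -1               # retry forever instead of discarding
        *.* @@collector.example.com:514          # TCP forward to the central server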

    Read the article

  • Hybrid Graphics on Ubuntu 12.04 switching to discrete

    - by cfstras
    I have a Sony Vaio VPCCB-27FX with hybrid graphics. Using vga_switcheroo enables me to switch my discrete card off to save power. Now, when I want to switch to the discrete card for performance, my system freezes. I already tried logging out and killing X with service lightdm stop, but it still freezes as soon as I echo DIS > switch. Typing blindly, echo IGD > switch returns me to my console, where it reads:

        [ 179.555171] i915: switched off

    but it seems the discrete card never gets switched on. Running echo DDIS > switch gives me the following:

        [540....] [drm:atop_op_jump] *ERROR* atombios stuck in loop for more than 5secs aborting
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing CEE2 (len 62, WS 0, PS 0) @ 0xCEFE
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BBF6 (len 1036, WS 4, PS 0) @ 0xBCF3
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BB8C (len 76, WS 0, PS 0) @ 0xBB94
        [541....] [drm:r600_RING_TEST] *ERROR* radeon: ring test failed (scratch(0x8504)=0xFFFFFFFF)
        [541....] [drm:evergreen_resume] *ERROR* evergreen startup failed on resume

    After that, the atombios part repeats a few times. Also, the terminal locks up again and SysRq+REISUB is my only rescue. Does anybody have an idea how I can switch to my discrete card without the system locking up?

        # uname -srvmpio
        Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        # lsb_release -r
        Description: Ubuntu 12.04 LTS

    Read the article

  • Streamline Active Directory account creation via automated web site

    - by SteveM82
    In my company we have high employee turnover, and hence our helpdesk receives about a dozen requests per week for new Active Directory accounts. Currently, we receive these requests simply via e-mail or voice-mail, and rarely do we have all of the information necessary to create the account. I would like to find a web application that can be used by a manager or supervisor to formalize the requests they make for AD accounts for new employees under their command. Ideally, the application would prompt for all of the necessary information and allow the helpdesk to review each request and approve or deny it. If approved, the application would take care of creating the account and send an e-mail to the manager. I have found several applications on the Internet that handle self-service account management (i.e., password resets or updating contact info), which is also nice to have, but nothing that streamlines the new account request and creation part. Can anyone make suggestions on such an application? Thanks.
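
    If nothing off the shelf covers the creation step, that part at least is easy to script behind a web form; a hedged sketch using the ActiveDirectory PowerShell module (the OU path, UPN suffix and mail settings are assumptions):

        Import-Module ActiveDirectory

        # values below would come from the manager's approved request
        $securePass = ConvertTo-SecureString 'TempP@ssw0rd!' -AsPlainText -Force

        New-ADUser -Name 'Jane Doe' `
                   -SamAccountName 'jdoe' `
                   -UserPrincipalName 'jdoe@example.com' `
                   -Path 'OU=Staff,DC=example,DC=com' `
                   -AccountPassword $securePass `
                   -ChangePasswordAtLogon $true `
                   -Enabled $true

        # notify the requesting manager once the account exists
        Send-MailMessage -To 'manager@example.com' -From 'helpdesk@example.com' `
                         -Subject 'AD account created: jdoe' `
                         -Body 'The requested account has been created.' `
                         -SmtpServer 'mail.example.com'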

    Read the article

  • Why would a Java app make an RPC call to itself?

    - by amphibient
    I am working with a multithreaded, homegrown, multi-module app in my new job. We use the Thrift protocol for RPC calls between different stand-alone applications in a distributed system. One of them listens on multiple ports, and I just noticed that it actually makes an RPC call to itself: from one thread, invoked from one socket it listens on (a web service call), to another port within the same app. I verified that it could accomplish the same thing if it just directly called the method that the remote procedure ultimately invokes, as it is all within the same application, same JVM. To make it even more mysterious, the call is completely synchronous, i.e. no callbacks involved. The first thread just sits and waits until the call it made across the wire to itself comes back. Now, I am perplexed why anybody would do it this way. It seems like phoning somebody who sits in the same room as you. Can anybody provide an explanation for why the developer before me would come up with the above-mentioned model? Maybe there is a reason and I am missing something.

    Read the article

  • My server won't send emails

    - by Tomcomm
    I've been sent here by the people at Stack Overflow. I know I'm using the right code because I have it working on another server, but when I try to send an email from a webpage on this particular server using PHP, I get a success message back and the email never gets through. In /var/log/maillog I see:

        Sep 11 14:20:28 ela1 postfix/smtp[11496]: CEE83E151FD: to=[My email address here], relay=none, delay=40, delays=0.08/0.01/40/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=btopenworld.co.uk type=MX: Host not found, try again)

    Can anyone help?
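
    Given the deferred status, a hedged set of checks aimed at DNS rather than the PHP code (also note the log shows btopenworld.co.uk where btopenworld.com might have been intended, so the recipient address itself is worth a second look):

        # can this box resolve an MX for the destination domain?
        dig MX btopenworld.co.uk
        dig MX btopenworld.co.uk @8.8.8.8     # compare against a public resolver

        # which nameservers is the server actually using?
        cat /etc/resolv.conf

        # once DNS is sorted out, ask postfix to retry the deferred mail
        postqueue -f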

    Read the article

  • Server 2008 locks me out when not using the machine for 10 minutes after installing SP2

    - by Daniel Magliola
    I recently installed Service Pack 2 on my Windows Server 2008 machine (which I use actively for development and am always logged on to). Since this installation, when I don't use the machine for some time (let's say 10 minutes), it locks itself, so I have to press Ctrl+Alt+Del and log back in. I have already checked the screen saver settings, and it's set to "None", as it always has been. I also looked into the power settings and everything looks right (20 minutes to turn off the monitor, and I haven't found any settings regarding locking in there). Do you have any idea what I can do so that it won't lock me out after not using the machine for a while? Thanks! Daniel

    Read the article

  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time during the 3 AM hour, my Xorg process will jump to 100% CPU and GPU load will also climb to 100%. The process also becomes unkillable: I cannot sudo kill -9 it or get back control with sudo service lightdm restart. I also cannot switch to a tty screen with Ctrl+Alt+F1. To reboot I have to log in with ssh, but this is not ideal, because if I reboot while it is doing this my ZFS pool will fail to mount when it comes back up (that is where my /home is). Does anyone have any idea why I can't stop and restart Xorg, or even better, know why this is happening? Thanks. NOTE: For anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem. My all-time record is 6 days without a crash. I'll post here if it crashes again or I'm able to set a new record.

    Read the article

  • recent unreliable wireless connection

    - by gabkdlly
    Recently, my internet connection over wireless (via a Netgear KWGR614 router) has become unreliable, on both a Dell laptop running Ubuntu 10.04 and my desktop running Ubuntu 10.10. The problem does not seem to occur on a laptop running Windows Vista, nor on a desktop running Windows 7 (this machine is connected with an ethernet cable). The problem does not seem to occur on my Openmoko Freerunner (running Android 1.5), though I hardly ever use this device to connect over WLAN, so the problem may have just slipped by. On my main Ubuntu desktop, I have tried the following wireless devices:

    - a Longshine PCI card (an old device with an RTL8180L chip)
    - a D-Link DWL-510 PCI card (this device threw warnings in dmesg)
    - a USB device from MSI (US54EX)

    Usually my wireless network shows up in the network manager with a normal signal strength, even when the connection speed is slow or the connection gets reset (asking me to click connect to re-authenticate my wireless connection). I have observed this problem with a Netgear KWGR614 router (with the manufacturer's firmware), as well as with a TP-LINK TL-WR741ND router running OpenWrt. Taking a look at my router's logs, I find many instances of the following line:

        Tuesday, 04 Jan 2011 03:53:01 [TCP SYN Flood][Deny access policy matched, dropping packet]

    I know that the Netgear router is susceptible to denial of service attacks, as I have previously been able to disrupt its operation by putting an nmap scan into a while loop. I use WEP or WPA to encrypt the wireless network. Is it possible that someone is jamming my signal?

    Read the article

  • Exalogic Elastic Cloud Software (EECS) version 2.0.1 available

    - by JuergenKress
    We are pleased to announce that as of today (May 14, 2012) the Exalogic Elastic Cloud Software (EECS) version 2.0.1 has been made Generally Available. This release is the culmination of over two and a half years of engineering effort from an extended team spanning 18 product development organizations on three continents, and is the most powerful, sophisticated and comprehensive Exalogic Elastic Cloud Software release to date. With this new EECS release, Exalogic customers now have an ideal platform not only for high-performance and mission-critical applications, but for standardization and consolidation of virtually all Oracle Fusion Middleware, Fusion Applications, Application Unlimited and Oracle GBU Applications. With the release of EECS 2.0.1, Exalogic is now capable of hosting multiple concurrent tenants, business applications and middleware deployments with fine-grained resource management, enterprise-grade security, unmatched manageability and extreme performance in a fully virtualized environment. The Exalogic Elastic Cloud Software 2.0.1 release brings important new technologies to the Exalogic platform:

    - Support for extremely high-performance x86 server virtualization via a highly optimized version of Oracle VM 3.x.
    - A rich, fully integrated Infrastructure-as-a-Service management system called Exalogic Control, which provides graphical, command line and Java interfaces that allow Cloud Users, or external systems, to create and manage users, virtual servers, virtual storage and virtual network resources.

    Webcast series: Rethink Your Business Application Deployment Strategy - "Redefining the CRM and E-Commerce Experience with Oracle Exalogic" (7-Jun @ 10am PT) and, on demand, "The Road to a Cloud-Enabled, Infinitely Elastic Application Infrastructure" (featuring Gartner analysts). WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • How to solve: "Connect to host some_hostname port 22: Connection timed out"

    - by Aufwind
    I have two Ubuntu machines. Both have openssh-client and openssh-server installed. SSHing from machine G (a fresh Ubuntu 11.10 installation) to machine K works great, but SSHing from machine K to machine G always results in the error:

        connect to host some_hostname port 22: Connection timed out

    I went through the troubleshooting section of help.ubuntu.com and got the following results:

        ps -A | grep sshd          # results in: 848 ? 00:00:00 sshd
        sudo ss -lnp | grep sshd   # results in:
        # 0 128 :::22 :::*  users:(("sshd",848,4))
        # 0 128 *:22  *:*   users:(("sshd",848,3))
        ssh -v localhost           # works!
        sudo ufw status verbose    # yields: "Status: inactive"

    I haven't changed anything in the config file. What can I do to locate the problem and solve it? Glad about every hint!

    Edit: ping was successful in both directions! I did a telnet <machineK> 22 from machine G, which resulted in "Trying ..." and then "telnet: Unable to connect to remote host: Connection timed out". But telnet the other way around worked just fine!

    Edit 2:

        ssh start/running, process 966   # yields: ssh start/running, process 966
        /etc/hostname    # contains my hostname, let's call it blubb
        /etc/hosts       # contains the following
        127.0.0.1 localhost
        # 127.0.1.1 blubb
        129.26.68.74 blubb    # I added this!
        sudo service ufw status   # yields: ufw start/running

    I installed Gufw and set it to ON. Then I selected ALLOW for Incoming. Then I sshed to another machine, from where I sshed back to my machine. Still the same error as above: connect to host blubb port 22: Connection timed out. Any more hints on what I can check?
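
    One more hedged check that can settle where the packets die: watch port 22 on machine G while machine K retries, to see whether the SYNs arrive at all (the interface name is an assumption):

        # on machine G
        sudo tcpdump -ni eth0 'tcp port 22'

        # on machine K, in parallel
        ssh -vvv user@blubb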

    Read the article

  • glassfish - Unknown error when trying port 4848

    - by Majid Azimi
    I'm installing GlassFish 3.1 on Windows XP Service Pack 3, but the configuration step gives this error:

        PERFORMING THE REQUIRED CONFIGURATIONS
        CREATING DOMAIN
        Executing command: C:\glassfish3\glassfish\bin\asadmin.bat --user admin --passwordfile C:\DOCUME~1\MAJIDA~1\LOCALS~1\Temp\glassfish-3.1-windows-ml.exe6\asadminTmp1079044298673991344.tmp create-domain --savelogin --checkports=false --adminport 4848 --instanceport 8080 --domainproperties=jms.port=7676:domain.jmxPort=8686:orb.listener.port=3700:http.ssl.port=8181:orb.ssl.port=3820:orb.mutualauth.port=3920 domain1
        C:\glassfish3\glassfish\bin\asadmin.bat --user admin --passwordfile C:\DOCUME~1\MAJIDA~1\LOCALS~1\Temp\glassfish-3.1-windows-ml.exe6\asadminTmp5898014821156752751.tmp create-domain --savelogin --checkports=false --adminport 4848 --instanceport 8080 --domainproperties=jms.port=7676:domain.jmxPort=8686:orb.listener.port=3700:http.ssl.port=8181:orb.ssl.port=3820:orb.mutualauth.port=3920 domain1
        Unknown error when trying port 4848. Try a different port number.
        Command create-domain failed.
        CLI130 Could not create domain, domain1

    I changed 4848 to other ports, but it doesn't work. The firewall is completely disabled. Could anyone help?
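
    A hedged check on the Windows XP side for whether something is already holding port 4848, or whether a stale java/asadmin process is lingering from an earlier attempt:

        rem list anything bound to port 4848
        netstat -ano | findstr :4848

        rem map the owning PID (for example 1234) reported above to a process name
        tasklist /FI "PID eq 1234"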

    Read the article

  • ORM and component-based architecture

    - by EagleBeek
    I have joined an ongoing project, where the team calls their architecture "component-based". The lowest level is one big database. The data access (via ORM) and business layers are combined into various components, which are separated according to business logic. E.g., there's a component for handling bank accounts, one for generating invoices, etc. The higher levels of service contracts and presentation are irrelevant for the question, so I'll omit them here. From my point of view the separation of the data access layer into various components seems counterproductive, because it denies us the relational mapping capabilities of the ORM. E.g., when I want to query all invoices for one customer I have to identify the customer with the "customers" component and then make another call to the "invoices" component to get the invoices for this customer. My impression is that it would be better to leave the data access in one component and separate it from business logic, which may well be cut into various components. Does anybody have some advice? Have I overlooked something?

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions. When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009, much of the functionality of HAT can now be found in the BizTalk Administration console. Select a BizTalk Group and you will be shown the Group Hub Overview page. This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries. These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:

    - Messages - last 100 received
    - Messages - last 100 sent
    - Messages - last 50 suspended
    - Service instances - last 100

    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go. Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article

  • Why is my query soooooo slow?

    - by geekrutherford
    A stored procedure used in our production environment recently became so slow that it caused the calling web service to begin timing out. When running the stored procedure in Query Analyzer it took nearly 3 minutes to complete. The stored procedure itself does little more than build a small bit of dynamic SQL, which calls a view with a WHERE clause at the end. At first the thought was that the query used within the view needed to be optimized. The query is quite long, which makes it easy to jump to this conclusion. Fortunately, after bringing the issue to the attention of a coworker, they asked: "Is there a WHERE clause, and if so, is there an index on the column(s) in it?" I had no idea and quickly said as much. A quick check on the table/column used in the WHERE clause indicated that indeed there was no index. Before adding the index, and after admitting I am no SQL wiz, I checked the internet for info on the difference between clustered and non-clustered indexes. I found the following site quite helpful: OdeToCode. After adding the non-clustered index on the column, the query that used to take nearly 3 minutes now takes 10 seconds! Ah, if only I'd thought to do this ahead of time!
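
    For reference, the fix itself is a one-liner; a hedged sketch with hypothetical table, view and column names (the real ones are whatever the view's WHERE clause filters on):

        -- non-clustered index on the column the dynamic WHERE clause filters by
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
            ON dbo.Orders (CustomerId);

        -- re-run the query and compare logical reads before and after
        SET STATISTICS IO ON;
        SELECT * FROM dbo.vw_OrderSummary WHERE CustomerId = 42;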

    Read the article

  • DDoS attacks to PBX

    - by user316687
    I'm wondering whether DDoS attacks against PBXs or telecommunications systems are actually possible. According to these links, they are:

    http://threatpost.com/en_us/blogs/firm-sees-more-ddos-attacks-aimed-telecom-systems-073112
    http://news.softpedia.com/news/DDOS-Attacks-Against-Telecom-Systems-Cost-as-Little-as-20-16-Per-Day-284875.shtml

    DDoS attacks against web servers mostly generate so much concurrent load, or so many connections, that the service becomes unavailable. Many government or non-profit organizations that suffered this kind of attack could eventually choose to shut down their web server and simply wait for the attack to end. For a DDoS attack against a PBX, I imagine the result would be telephones that are busy or ringing all the time, unstoppably. This kind of attack could really damage any kind of organization. Is it possible to do that, or are we just at the beginning of such attacks?

    Read the article
