Search Results

Search found 8381 results on 336 pages for 'bad neighbor'.

Page 40/336 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • NetBeans has really bad FTP support, forces me to download all files before I can continue

    - by Vivek
    Hello friends, I have started using NetBeans recently after using Aptana, phpDesigner and Notepad++. I love NetBeans for its speed, and it has almost everything I want except for the fact that the FTP support is really bad. To start working on an FTP server, I have to download all the files to my localhost first, which is such a waste of bandwidth, and if there are many files, i.e. 1000+, it's really annoying. I have tried mounting the remote FTP as a local filesystem in Windows and then using NetBeans to access it, but that doesn't work out either. If anybody who uses NetBeans a lot for PHP development can guide me on this, I would be highly obliged; this trivial problem is keeping me from using this awesome IDE. Thanks & regards.

    Read the article

  • Is it considered bad practice to have ViewModel objects hold the Dispatcher?

    - by stiank81
    My WPF application is structured using the MVVM pattern. The ViewModels communicate asynchronously with a server, and when the requested data is returned a callback in the ViewModel is triggered and does something with this data. This runs on a thread which is not the UI thread. Sometimes these callbacks involve work that needs to be done on the UI thread, so I need the Dispatcher. This might be things such as: adding data to an ObservableCollection; triggering Prism commands that will set something to be displayed in the GUI; creating WPF objects of some kind. I try to avoid the latter, but the first two points here I find to be reasonable things for ViewModels to do. So: is it okay to have ViewModels hold the Dispatcher to be able to invoke commands on the UI thread? Or is this considered bad practice? And why?
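
    A minimal sketch of the pattern in question, assuming the ViewModel is constructed on the UI thread (class, method and property names are illustrative, not taken from the post):

      using System;
      using System.Collections.ObjectModel;
      using System.Windows.Threading;

      public class MeetingListViewModel
      {
          private readonly Dispatcher _dispatcher;

          public ObservableCollection<string> Items { get; } = new ObservableCollection<string>();

          public MeetingListViewModel()
          {
              // Constructed on the UI thread, so this captures the UI dispatcher.
              _dispatcher = Dispatcher.CurrentDispatcher;
          }

          // Invoked by the server proxy on a background thread.
          public void OnDataReceived(string item)
          {
              _dispatcher.BeginInvoke(new Action(() => Items.Add(item)));
          }
      }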

    Read the article

  • Is it a bad idea to create tests that rely on each other within a test fixture?

    - by nbolton
    For example:

    // NUnit-like pseudo code (within a TestFixture)
    Ctor() { m_globalVar = getFoo(); }

    [Test] Create() { a(m_globalVar) }

    [Test] Delete() {
        // depends on Create being run
        b(m_globalVar)
    }

    … or …

    // NUnit-like pseudo code (within a TestFixture)
    [Test] CreateAndDelete() {
        Foo foo = getFoo();
        a(foo);
        // depends on Create being run
        b(foo);
    }

    I'm going with the latter, and assuming that the answer to my question is: no, at least not with NUnit, because according to the NUnit manual, "The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session." Also, can I assume it's bad practice in general, since tests can usually be run separately? If so, the result of Create may never be cleaned up by Delete.

    Read the article

  • Is it bad practice to use Reflection in Unit testing?

    - by Sebi
    Over the last few years I always thought that in Java, Reflection is widely used during unit testing. Since some of the variables/methods which have to be checked are private, it is somehow necessary to read their values. I always thought that the Reflection API is also used for this purpose. Last week I had to test some packages and therefore write some JUnit tests. As always, I used Reflection to access private fields and methods. But my supervisor, who checked the code, wasn't really happy with that and told me that the Reflection API wasn't meant to be used for such "hacking". Instead he suggested modifying the visibility in the production code. Is it really bad practice to use Reflection? I can't really believe that.
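
    A minimal sketch of the kind of access being discussed (the Counter class and its private count field are made up for illustration): a JUnit test reading a private field through the Reflection API.

      import static org.junit.Assert.assertEquals;

      import java.lang.reflect.Field;
      import org.junit.Test;

      class Counter {
          private int count;
          void increment() { count++; }
      }

      public class CounterTest {
          @Test
          public void incrementUpdatesPrivateCount() throws Exception {
              Counter counter = new Counter();
              counter.increment();

              // Read the private "count" field via reflection.
              Field countField = Counter.class.getDeclaredField("count");
              countField.setAccessible(true);
              assertEquals(1, countField.getInt(counter));
          }
      }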

    Read the article

  • Is it bad practice to select upstream servers based upon the HTTP method?

    - by PartlyCloudy
    I'm wondering if it is bad practice to have a reverse proxy that selects the upstream server depending on the HTTP method used. The background is that I have an arbitrary web server that handles POST requests with some logic behind it. The same resources also contain static content that can be retrieved using GET. After some benchmarking I realized that nginx would serve the static content much faster than my arbitrary web server does. I checked the option of forwarding incoming requests internally using nginx, which is feasible. But this would mean that different servers serve the same resource, depending only on whether it is requested with GET or POST, and with different response header fields.
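
    A minimal nginx sketch of the split being described, assuming the application server listens locally on port 8080 (upstream name, port and paths are placeholders): GET requests are served from disk, POST requests are proxied to the application server.

      upstream app_backend {
          server 127.0.0.1:8080;   # the arbitrary web server that handles POST
      }

      server {
          listen 80;
          root /var/www/static;

          location / {
              # proxy_pass without a URI part is permitted inside "if" in a location block
              if ($request_method = POST) {
                  proxy_pass http://app_backend;
              }
              try_files $uri $uri/ =404;
          }
      }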

    Read the article

  • Why is this considered bad practice? Or is it? (ASP.NET)

    - by user318573
    Would this code be considered bad practice:

    <div id="sidebar"> <% =DisplayMeetings(12) %> </div>

    This is a snippet of code from the default.aspx of a small web app I have worked on. It works perfectly well and runs very fast, but as always, I am aware of the fact that just because it works doesn't mean it is OK. Basically, the DisplayMeetings subroutine outputs an unordered list as plain HTML with no inline styling, just the requisite markup, and my CSS performs all the necessary formatting. The data for generating the list comes from a SQL Server database (the parameter controls how many rows to return) and I am using stored procedures and a data reader for fast access. This keeps my front end extraordinarily simple and clean, IMHO, and lets me do all the work in VB or C# in a separate module. I could of course use a databound Repeater (and probably six or more other methods) to accomplish the same thing, but are they any better? Other than losing the design-time features of VS2010?
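
    A rough sketch of what a helper like the DisplayMeetings routine described above might look like (the connection string, stored procedure and column names are assumptions, not taken from the post):

      using System.Data;
      using System.Data.SqlClient;
      using System.Text;
      using System.Web;

      public static class MeetingHelpers
      {
          // Returns the next 'count' meetings as a bare unordered list; CSS does the styling.
          public static string DisplayMeetings(int count)
          {
              var html = new StringBuilder("<ul class=\"meetings\">");
              using (var conn = new SqlConnection("<your connection string>"))
              using (var cmd = new SqlCommand("dbo.GetUpcomingMeetings", conn))
              {
                  cmd.CommandType = CommandType.StoredProcedure;
                  cmd.Parameters.AddWithValue("@MaxRows", count);
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          html.Append("<li>")
                              .Append(HttpUtility.HtmlEncode(reader["Title"].ToString()))
                              .Append("</li>");
                      }
                  }
              }
              return html.Append("</ul>").ToString();
          }
      }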

    Read the article

  • Is it bad practice to initialize a variable to a dummy value?

    - by froadie
    This question is a result of the answers to this question that I just asked. It was claimed that this code is "ugly" because it initializes a variable to a value that will never be read:

    String tempName = null;
    try {
        tempName = buildFileName();
    } catch (Exception e) {
        ...
        System.exit(1);
    }
    FILE_NAME = tempName;

    Is this indeed bad practice? Should one avoid initializing variables to dummy values that will never actually be used?

    (EDIT - And what about initializing a String variable to "" before a loop that will concatenate values to the String? Or is this in a separate category? E.g.:

    String whatever = "";
    for (String str : someCollection) {
        whatever += str;
    }
    )

    Read the article

  • Are Domain-Specific Languages (DSLs) bad for the common programmer?

    - by iestyn
    I have lately been delving into F# and the new DSL stuff, as in the Microsoft SQL Server Modeling CTP, and have some concerns. Will this new idea be bad for skilled programmers? Is code going to be dumbed down? I know I sound like a Luddite, but this does worry me, after spending years practising my craft, only to be scuttled by genius from within. I am afraid, very afraid. Will I now be trapped in a job that only programs against a DSL, so that for every job I work on I have to learn a whole new DSL built on top of a framework (.NET, Java), of which I will only be allowed to touch certain parts? I don't think the world is ready for DSLs, but the sales pitch is deafening!

    Read the article

  • Is it a bad idea to load 160,000 variables into memory in a PHP script?

    - by user1397417
    I'm processing a large file of sentences, and I only care about the lines that are in English or Japanese, so while I'm reading the file, if I find an English or Japanese sentence I want to save it in an array; after I finish reading, I open another file for writing and output all the sentences in the array. This would result in me storing about 160,000 values, all strings, some short and some long. I'm just wondering if it's a bad idea, memory-wise, to hold so many values. Example line from the file: "1978033 jpn ?????????????????????"
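
    A rough sketch of the approach described, assuming a tab-separated input file (the file names and the language check are placeholders): matching lines are collected in one array and written out after the read finishes.

      <?php
      $kept = array();

      $in = fopen('sentences.txt', 'r');
      while (($line = fgets($in)) !== false) {
          // stand-in for the real English/Japanese test
          if (strpos($line, "\teng\t") !== false || strpos($line, "\tjpn\t") !== false) {
              $kept[] = $line;   // roughly 160,000 entries end up here
          }
      }
      fclose($in);

      file_put_contents('filtered.txt', implode('', $kept));

    Whether this fits comfortably depends mainly on PHP's memory_limit setting and the average sentence length.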

    Read the article

  • How to reduce timeout for bad password on disconnected laptop?

    - by Elroy Flynn
    I use a Windows 7 laptop. When not attached to my AD domain, if I enter an incorrect password, I have to wait approximately a full minute before the failure response comes back. When attached to the domain, the response is instant. I think that what's happening is that when my entry fails against the cached password, Windows tries to reach the domain controller, and the timeout for that operation is about 60 seconds. Is there a registry entry that controls the timeout? I'd love to reduce it.

    Read the article

  • Is it bad to be logged in as admin all the time?

    - by poke
    At the office where I work, three of the other members of the IT staff are logged into their computers all the time with accounts that are members of the domain administrators group. I have serious concerns about being logged in with admin rights (either local or for the domain). As such, for everyday computer use, I use an account that just has regular user privileges. I also have a different account that is part of the domain admins group. I use this account when I need to do something that requires elevated privileges on my computer, one of the servers, or on another user's computer. What is the best practice here? Should network admins be logged in with rights to the entire network all the time (or even to their local computer, for that matter)?

    Read the article

  • WordPress hacked. Disabled hacked site but bad traffic continues [closed]

    - by tetranz
    Possible Duplicate: My server's been hacked EMERGENCY
    My Ubuntu 10.04 LTS VPS has been hacked, probably via a WordPress site. I was alerted to it when I noticed the incoming traffic was unusually high. A WordPress site was littered with eval(base64_decode(...)) code in lots of files. My fault: I had some files writeable by www-data which shouldn't have been. I've disabled that site (a2dissite ... and restart Apache). This has reduced it, but I am still getting some malware-type traffic. My server runs several WordPress and Drupal sites and a home-grown PHP site. I have captured traffic with tcpdump and looked at it in Wireshark. It's reaching out to the login page of some Joomla sites, trying multiple logins. The traffic stops when I stop Apache. If I a2dissite every site and reload (not restart) Apache, the traffic continues. At that point I have no virtual hosts running and no DocumentRoot in my apache2.conf, so I don't know how Apache is still running something. I have searched the other sites with grep for likely-looking PHP code with no success. I may have missed it, but I haven't found anything suspicious in the Apache logs. I have mod_status running. I haven't really seen anything much there, except that someone is still trying to POST to the theme page on the disabled WordPress site, but they now get a 404. What should I be looking for? Are there any tools or whatever which would give me more info about how Apache is generating that traffic? Thanks
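
    For reference, a hedged example of the kind of search mentioned above (the web-root path is a placeholder): listing files that contain the injected pattern, and PHP files modified recently.

      # files containing the injected pattern
      grep -rl --include='*.php' 'eval(base64_decode' /var/www

      # PHP files changed in the last 7 days
      find /var/www -name '*.php' -mtime -7 -ls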

    Read the article

  • HDD is not recognized/initialized via USB, only via SATA - is a reformat through USB a bad idea?

    - by Wuschelbeutel Kartoffelhuhn
    I have a 4TB Hitachi HDD that I purchased in Europe (I use it as a backup disk); I use Windows 7. When I connect it to a SATA port, it is recognized in Windows Explorer and gives no problems, even after transferring 3TB at a time or after being on for days. When I connect it via a SATA-to-USB2.0 adapter, it is also recognized, but when I transfer a large amount of data, it will intermittently stop being recognized by Windows Explorer and cancel the transfer. When I connect it via an external enclosure (which is technically a SATA-to-USB3.0 adapter), it does not display at all in Windows Explorer, but Disk Management will show the drive, albeit uninitialized (prompts for format). I only got the external enclosure because I want to backup my files more conveniently (instead of having to open the computer case each time). Do you advise against reformat/initialization via the external enclosure? Can it screw up things in an irrevocable way (Master Boot Record etc.)?

    Read the article

  • Is mismatched firmware on drives in a RAID-6 a bad thing?

    - by bwerks
    Hi all, I recently expanded a RAID-1 to a RAID-6 with six drives. I ordered all four of the new drives from the same place, and all of them were advertised to be the same drives as the original two: Seagate 15k RPM 146GB. However, when I was looking at the drives in the PERC 6/i utility, one of them appeared to have an earlier firmware version; it had S515, compared to the other five drives with S527. Sure enough, after inspecting the drive itself, the label advertised the earlier firmware version. Running Dell's SAS firmware upgrade utility should in theory have moved them all up to S52A, but when I ran it, it moved the S527 drives up to S52A and left the S515 drive untouched. Is this something to be worried about? If it's something that should be corrected, is there a way to target a particular drive for upgrade, since the firmware utility didn't seem to do it by itself?

    Read the article

  • My (C:) drive changed from Basic to Dynamic, is this bad?

    - by bbman225
    I'm really worried here. My computer still runs, so I take this as a good sign, but let me explain my situation: I am trying to install Ubuntu Linux, and the installer was having problems, so I went back into the partitioning tool on Windows 7 (after having successfully shrunk my C drive and created 55 GB of unallocated space) and attempted to create a new partition out of the 55 GB and make it a simple NTFS volume, so that I could let the installer wipe it clean again and format it in whatever file system it prefers. Now, after googling it and running through the process, I noticed that all of my drives, including the C drive and the one I just made, changed from type "Basic" to type "Dynamic." What is a dynamic drive, and should I be worried?

    Read the article

  • Website content hosted with Google. Good or bad?

    - by user305052
    I recently decided to host my styles.css and various scripts on Google Docs and link them into my website. I also have all my images hosted through Picasa so that they too will load much faster and consistently across users. My site has most of its traffic from Japan, Africa, and South America, so I assume there will be a performance boost for my users since my server is hosted in Hong Kong. I (in Canada) have measured my load times to be half of what they used to be. Basically it's a free CDN for my personal stuff. I'm not too sure about all of this yet, so here's my question: what are the caveats of this setup? EDIT: So after rummaging through the ToS of both Picasa and Docs, there doesn't seem to be anything wrong with this kind of use.

    Read the article

  • Either a bad nginx+php-fpm config, or nginx+php-fpm cannot handle the load?

    - by The Wolf
    I have WordPress installed on my server, configured (hopefully) with nginx + php-fpm + MariaDB. I am trying to import a 1.5MB XML file using the WordPress importer. Every time I try to upload it through the importer, the request gets cut off and I just get a blank screen. Here is my error log (I have only posted two of the errors):

    [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
    [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"

    I don't know why it can't process the WordPress export .xml. I have already increased max_file_upload etc., but nothing changes. I hope somebody can help me. Here are my configs:

    nginx.conf

    user nginx;
    worker_processes 8;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        #tcp_nopush on;
        server_tokens off;
        keepalive_timeout 65;
        fastcgi_read_timeout 500;
        #gzip on;
        client_max_body_size 2M;

    php-fpm.conf

    ;;;;;;;;;;;;;;;;;;;;;
    ; FPM Configuration ;
    ;;;;;;;;;;;;;;;;;;;;;
    ; All relative paths in this configuration file are relative to PHP's install
    ; prefix.
    ; Include one or more files. If glob(3) exists, it is used to include a bunch of
    ; files from a glob(3) pattern. This directive can be used everywhere in the
    ; file.
    include=/etc/php-fpm.d/*.conf
    ;;;;;;;;;;;;;;;;;;
    ; Global Options ;
    ;;;;;;;;;;;;;;;;;;
    [global]
    ; Pid file
    ; Default Value: none
    pid = /var/run/php-fpm/php-fpm.pid
    ; Error log file
    ; Default Value: /var/log/php-fpm.log
    error_log = /var/log/php-fpm/error.log
    ; Log level
    ; Possible Values: alert, error, warning, notice, debug
    ; Default Value: notice
    ;log_level = notice
    ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
    ; interval set by emergency_restart_interval then FPM will restart. A value
    ; of '0' means 'Off'.
    ; Default Value: 0
    ;emergency_restart_threshold = 0
    ; Interval of time used by emergency_restart_interval to determine when
    ; a graceful restart will be initiated. This can be useful to work around
    ; accidental corruptions in an accelerator's shared memory.
    ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
    ; Default Unit: seconds
    ; Default Value: 0
    ;emergency_restart_interval = 0
    ; Time limit for child processes to wait for a reaction on signals from master.
    ; Available units: s(econds), m(inutes), h(ours), or d(ays)
    ; Default Unit: seconds
    ; Default Value: 0
    ;process_control_timeout = 0
    ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
    ; Default Value: yes
    daemonize = no
    ;;;;;;;;;;;;;;;;;;;;
    ; Pool Definitions ;
    ;;;;;;;;;;;;;;;;;;;;
    ; See /etc/php-fpm.d/*.conf

    ps aux

    [root@host etc]# ps aux
    USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    root 1 0.0 0.1 2900 1380 ? Ss Jun02 0:00 init
    root 2 0.0 0.0 0 0 ? S Jun02 0:00 [kthreadd/9308]
    root 3 0.0 0.0 0 0 ? S Jun02 0:00 [khelper/9308]
    root 124 0.0 0.0 2464 576 ? S<s Jun02 0:00 /sbin/udevd -d
    root 460 0.0 0.1 35976 1308 ? Sl Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
    root 474 0.0 0.0 8940 1028 ? Ss Jun02 0:00 /usr/sbin/sshd
    root 481 0.0 0.0 3264 876 ? Ss Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
    root 491 0.0 0.1 6268 1432 ? S Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com.
    mysql 584 0.1 6.8 679072 71456 ? Sl Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use
    root 586 0.0 0.3 12008 3820 ? Ss Jun02 0:01 sshd: root@pts/0
    root 629 0.0 0.0 9140 756 ? Ss Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
    root 630 0.0 0.0 9140 520 ? S Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
    root 645 0.0 0.1 12788 1928 ? Ss Jun02 0:01 sendmail: accepting connections
    smmsp 653 0.0 0.1 12576 1728 ? Ss Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
    root 691 0.0 0.1 7148 1184 ? Ss Jun02 0:00 crond
    root 698 0.0 0.1 6272 1688 pts/0 Ss Jun02 0:00 -bash
    root 1006 0.0 0.0 7828 924 ? Ss 00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
    nginx 1007 0.0 0.1 8156 1724 ? S 00:30 0:00 nginx: worker process
    nginx 1008 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1009 0.0 0.1 8020 1356 ? S 00:30 0:00 nginx: worker process
    nginx 1011 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1012 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1013 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1014 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1015 0.0 0.1 8024 1344 ? S 00:30 0:00 nginx: worker process
    root 1030 0.0 0.2 25396 2904 ? Ss 00:30 0:00 php-fpm: master process (/etc/php-fpm.conf)
    apache 1031 0.0 1.9 40700 20624 ? S 00:30 0:00 php-fpm: pool www
    apache 1032 0.0 2.0 41924 21888 ? S 00:30 0:01 php-fpm: pool www
    apache 1033 0.0 1.9 41212 20848 ? S 00:30 0:01 php-fpm: pool www
    apache 1034 0.0 1.9 40956 20792 ? S 00:30 0:01 php-fpm: pool www
    apache 1035 0.0 2.0 41560 21556 ? S 00:30 0:02 php-fpm: pool www
    apache 1040 0.0 1.8 39292 19120 ? S 00:30 0:00 php-fpm: pool www
    root 1125 0.0 0.0 6080 1040 pts/0 R+ 01:04 0:00 ps aux

    netstat -l

    [root@host etc]# netstat -l
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address               Foreign Address  State
    tcp   0      0      *:ssh                       *:*              LISTEN
    tcp   0      0      localhost.localdomain:smtp  *:*              LISTEN
    tcp   0      0      localhost.locald:cslistener *:*              LISTEN
    tcp   0      0      *:mysql                     *:*              LISTEN
    tcp   0      0      *:http                      *:*              LISTEN
    tcp   0      0      *:ssh                       *:*              LISTEN
    Active UNIX domain sockets (only servers)
    Proto RefCnt Flags   Type   State     I-Node   Path
    unix  2      [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux
    unix  2      [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart
    unix  2      [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock

    Hope somebody can help me figure out what the problem is.
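
    The "connect() failed (111: Connection refused) ... fastcgi://127.0.0.1:9000" errors mean nginx could not reach PHP-FPM at that address when those requests arrived. As a reference point, a minimal sketch of the two settings that have to agree, with illustrative paths not taken from the post:

      # /etc/php-fpm.d/www.conf
      listen = 127.0.0.1:9000

      # nginx server block
      location ~ \.php$ {
          fastcgi_pass 127.0.0.1:9000;   # must match PHP-FPM's "listen" address
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }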

    Read the article

  • Can a malicious hacker distribute Linux distributions that trust bad root certificates?

    - by iamrohitbanga
    Suppose a hacker launches a new Linux distro with Firefox bundled with it. A browser contains the certificates of the root certification authorities of the PKI. Because Firefox is a free browser, anyone can package it with fake root certificates, and a fake root certificate could vouch for a certification authority that is not actually trustworthy. Could this be used to authenticate some websites? How? Many existing Linux distros are mirrored by individuals, who could easily package software containing certificates that enable such attacks. Is the above possible? Has such an attack taken place before?

    Read the article

  • Can a Linksys router be the cause of bad speeds on a 1.5 Mbps link?

    - by gramware
    We use a Linksys 5-port router at a small organization with about 20 employees. We recently acquired a 1.5 Mbps fibre link, but sometimes the link goes down and speeds are still low. On enquiry to the ISP, this was part of the response: "However there maybe throttling due to the router in place. A Linksys is a low end router and may be unable to carried traffic of up to 1536Kbps. We are in a position to deploy a Cisco 871 router on test for 2 wks to eliminate that possibility. Also kindly advise the destination of the ping results they look to high." How true is that about the router throttling the network, and do we need a bigger one?

    Read the article

  • Why is it a bad idea to use multiple NAT layers, or is it?

    - by iamrohitbanga
    The computer network of an organization has a NAT with the 192.168/16 IP address range. There is a department with a server that has the IP address 192.168.x.y, and this server handles the hosts of that department behind another NAT with the 172.16/16 address range. Thus there are two layers of NAT. Why don't they use subnetting instead? That would allow easy routing. I feel multiple layers of NAT can cause performance losses. Could you please help me compare the two design strategies?

    Read the article

  • Nagios check_bgp_neighbors plugin showing critical status

    - by user141610
    I am trying to configure the Nagios check_bgp_neighbors plug-in on Ubuntu and have followed the README file of the check_bgp_neighbors plug-in. I have made the following changes:

    define command{
        command_name check_bgp_all
        command_line $USER1$/check_bgp_neighbors -H $HOSTADDRESS$ -C $USER3$ -n $ARG1$ -n $ARG2$
    }

    to

    define command{
        command_name check_bgp_all
        command_line /usr/local/nagios/libexec/check_bgp_neighbors.sh -H xx.xx.xx.49 -C snmpName -n xx.xx.xx.50
    }

    And

    define service{
        use server-service
        hostgroup_name svc-bgp1
        service_description BGP Check 1
        check_command check_bgp_all!10.0.0.1!172.16.0.2
    }

    to

    define service{
        use generic-service
        hostgroup_name svc-bgp1
        service_description BGP Check 1
        check_command check_bgp_all!xx.xx.xx.50
    }

    xx.xx.xx.49 is the IP of the host router and xx.xx.xx.50 is the IP of the eBGP neighbour.

    Status information:
    line: neighbor:xx.xx.xx.50:sent:78838:received:9769
    Failed: status:6 prefixes:16 sent:0 received:1

    Log:
    [1353997904] SERVICE NOTIFICATION: router1;router1;BGP CHECK 2;CRITICAL;notify-service-by-email;line: neighbor:103.7.248.50:sent:78842:received:9772
    [1353997904] SERVICE NOTIFICATION: router1;router1;BGP CHECK 2;CRITICAL;notify-service-by-sms;line: neighbor:103.7.248.50:sent:78842:received:9772

    Why does it show critical status? I am not getting responses to this question; if you need additional information, please mention it in a comment.

    Read the article

  • Bad resolution when running three screens at the same time

    - by Carl
    Hi, I am currently using three screens at the same time with my ATI 5770 and an active DisplayPort converter. The thing is that the third screen (the one that is using the active DisplayPort converter) is showing terrible resolution compared to my other two screens (Samsung SyncMaster P23). Two of my screens have a max resolution of 1920x1090, while the third one is only capable of 1600x1200. Do any of you have a solution to this problem? By the way, this is what the active DisplayPort converter looks like: http://accessories.us.dell.com/sna/products/Cables/productdetail.aspx?c=us&l=en&s=dhs&cs=19&sku=330-5521#Overview

    Read the article

  • DriverMax - updating drivers that are not digitally signed - good or bad?

    - by Paul
    Win7 Home Premium 32-bit. I am currently using DriverMax to keep my drivers up to date. Sometimes it suggests that a newer driver is available for download, but the driver is not digitally signed. Is it safe to update to an unsigned driver or not? What are the implications of signed vs. unsigned drivers? I always create a system restore point before updating any drivers anyway, and I know I can roll back a driver.

    Read the article

  • Is sending email from EC2 / Rackspace Cloud a bad idea?

    - by Michael Buckbee
    In this article it is mentioned that TrendMicro is now treating all emails from Amazon's EC2 as coming from "Dial Up Users" (i.e., likely to be spam), and this is creating severe deliverability issues with their emails. We're having all kinds of issues sending email from our app servers on Rackspace Cloud (which may or may not be DUL'd), and I wonder if this isn't just a losing battle and we should try to get a different host for our SMTP server.

    Read the article
