Search Results

Search found 15423 results on 617 pages for 'uses clause'.


  • Windows 7 root certificate updates

    - by hstr
    I work for a company that uses Windows 7 for end-user computing. The Windows 7 computers are updated via a WSUS installation, and access to Microsoft Update is blocked. We have a problem with a number of websites whose certificates appear to be invalid, though they are perfectly OK. The problem is that Windows 7 apparently does an on-demand update of root certificates through Windows Update, rather than rolling out a monthly update as Windows XP did. Now that Windows Update is blocked, how should root certificates be updated? It appears that WSUS does not handle this feature. Thanks in advance.
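
    A hedged aside: whether on-demand root updates are even enabled is governed by the "Turn off Automatic Root Certificates Update" policy, which maps to a registry value you can check on the clients (path per the policy documentation):

        reg query "HKLM\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot" /v DisableRootAutoUpdate

    Since WSUS does not carry the root-certificate updates, the usual alternative is distributing the root certificates to the clients via Group Policy instead.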

    Read the article

  • Named my RPi 512MB @jerpi_bilbo

    - by hinkmond
    To keep our multiple Raspberry Pi boards apart from each other, I've now named my RPi Model B w/512MB "jerpi_bilbo", which stands for Java Embedded Raspberry Pi - Bilbo (named after the Hobbit from the J.R.R. Tolkien stories). I also set up a Twitter account for him. You can follow him at: @jerpi_bilbo He's self-tweeting, manually prompted so far (using Java Embedded 7.0 and the twitter4j Java library). Works great! I'm setting him up for automated self-tweeting soon, so watch for that... Here's a pointer to the open source twitter4j Java library: download here Just unzip and extract out the twitter4j-core-2.2.6.jar and put it on your Java Embedded classpath. Here's how @jerpi_bilbo uses it to Tweet with his Java Embedded runtime:

        import twitter4j.*;

        public final class Tweet {
            public static void main(String[] args) {
                // Use the first command-line argument as the status, or fall back to a default
                String statusStr;
                if ((args.length > 0) && (args[0] != null)) {
                    statusStr = args[0];
                } else {
                    statusStr = "Hello World!";
                }
                // Create a new instance of the Twitter class
                Twitter twitter = new TwitterFactory().getInstance();
                try {
                    Status status = twitter.updateStatus(statusStr);
                    System.out.println("Successfully updated the status to: " + status.getText());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    That's all you need. Java Embedded rocks the RPi! And, @jerpi_bilbo is alive... Hinkmond
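
    To try it yourself, a minimal compile-and-run sequence might look like this (assuming twitter4j-core-2.2.6.jar sits in the current directory and the OAuth credentials live in a twitter4j.properties file, as described in the twitter4j docs):

        javac -cp twitter4j-core-2.2.6.jar Tweet.java
        java -cp .:twitter4j-core-2.2.6.jar Tweet "Hello from jerpi_bilbo"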

    Read the article

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence; applications would have to filter that noise themselves. Older Mac versions used only CR as the line ending, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing itself instead of relying on text mode. Implementing newline interpretation in the parser might also avoid some of the overhead of text mode: buffers need to be rewritten (and possibly resized) before being returned to the application, which may be less efficient than doing the same work in the application itself. So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
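
    To make the distinction concrete, here is a minimal C sketch of the two modes (the file name is hypothetical). On POSIX systems the "b" flag is accepted but has no effect; on Windows, omitting it turns on the CR LF and Ctrl-Z translations described above:

        #include <stdio.h>

        int main(void) {
            /* Text mode: on Windows, CR LF pairs are translated to '\n' on read,
               and a 0x1A (Ctrl-Z) byte is treated as end-of-file. */
            FILE *text = fopen("data.txt", "r");

            /* Binary mode: bytes are passed through untouched on every platform. */
            FILE *binary = fopen("data.txt", "rb");

            if (text)   fclose(text);
            if (binary) fclose(binary);
            return 0;
        }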

    Read the article

  • Custom host file for Firefox

    - by acidzombie24
    Instead of changing my hosts file, I'd like Firefox to think my domain is on my own server while my other browser uses the real IP address. I used to edit my hosts file, but that forces both browsers to the same IP address. I found Change Host, but it doesn't appear to use the alternative hosts file. I also saw a comment asking when it will work on Firefox 6+. I tried Host Admin and it fails: it works, but the alternative IP address must already be in your hosts file (which I don't want), and it only lets you deselect a domain so the hosts file entry is ignored, which is not what I want.

    Read the article

  • How do I uninstall the TuxOnIce kernel in 12.04?

    - by Lluis
    I recently installed TuxOnIce on a Toshiba Z830. I have Ubuntu 12.04 (kernel: 3.2.0-26). I wanted to be able to hibernate, which I consider a basic thing an OS should allow you to do. Well, it didn't work... but they already tell you it may not, so I removed it. To do all this I followed: Problem with Hibernation After uninstalling, I switched off the laptop, and after that I started to have several problems. The first one was that Cisco VPN didn't work anymore, and then I realised that I could not even suspend my laptop. I found it very strange that after removing TuxOnIce I still had this: /lib/modules/3.2.0-26-generic-tuxonice/ The VPN problem could be solved by just copying CiscoVPN/ from my previous kernel's 3.2.0-26-generic/ directory into the tuxonice one. Not very elegant, but it works. Now, for the suspend problem (and the previous one too), I can hold Shift when starting, select my old kernel, and then suspend works again. In my opinion TuxOnIce was not correctly uninstalled, as it left that kernel behind and, worse, Ubuntu uses it if I do not take action. I have these workarounds... and here is my question: how can I delete this TuxOnIce kernel safely? If you need more info please let me know.
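
    A minimal sketch of how the cleanup might look with the usual Debian/Ubuntu tooling (the exact package names are an assumption; check what dpkg actually reports before purging anything):

        # list installed kernel packages and look for the tuxonice ones
        dpkg -l | grep tuxonice

        # purge the tuxonice kernel image and headers (hypothetical names)
        sudo apt-get purge linux-image-3.2.0-26-generic-tuxonice
        sudo apt-get purge linux-headers-3.2.0-26-generic-tuxonice

        # rebuild the GRUB menu so the leftover boot entry disappears
        sudo update-grub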

    Read the article

  • Unable to open websites that use HTTPS on Linux

    - by negai
    I have the following network configuration: my PC (192.168.1.20/24) uses 192.168.1.1/24 as its gateway; a D-Link 2760U router with local address 192.168.1.1/24 has a VPN connection open to the provider using PPTP. Whenever I try to open certain websites that require authorization (e.g. gmail.com, coursera.org), I get a request timeout. This problem is observed mostly on Linux (Ubuntu 12.04 and Debian 6.0), while most of these websites work correctly on Windows XP. Could you please help me diagnose the problem? Could it be related to NAT + HTTPS? Thanks
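
    One common culprit on PPTP links is an MTU/path-MTU problem: the large TLS handshake packets get dropped inside the tunnel while small plain-HTTP requests go through. A hedged diagnostic from the Linux box (the host name and sizes are just examples) is to send non-fragmentable pings of decreasing size and see where they start succeeding:

        # -M do forbids fragmentation; -s sets the payload size
        ping -M do -s 1400 www.google.com
        ping -M do -s 1300 www.google.com

        # if 1400 fails and 1300 works, try lowering the interface MTU accordingly
        sudo ip link set dev eth0 mtu 1400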

    Read the article

  • Is using hosts for resolving a SQL Server more performant?

    - by Ice
    Hi, we have a legacy application which uses an access.mdb with hundreds of ODBC-connected tables on a SQL Server. The access.mdb contains nothing other than these ODBC connections. Now we are considering using a virtual SQL Server name for these ODBC connections and resolving it in the local hosts file with the IP address of the real SQL Server. This way we can easily switch between a test database server and the production server by changing one single entry in the hosts file. EVERYTHING works fine, and now comes the question: could it be that this is more performant, because there is one single point for resolving the SQL Server (name or IP address)? Is there something like a network cache / DNS cache? peace Ice
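
    For illustration, the setup described boils down to a single hosts entry like this (names and addresses are hypothetical); Windows keeps such lookups in its DNS resolver cache anyway, so the per-connection resolution cost is negligible either way:

        # C:\Windows\System32\drivers\etc\hosts
        192.168.10.5    virtualsql      # production SQL Server
        # 192.168.10.6  virtualsql      # test server - swap by commenting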

    Read the article

  • Make services not start automatically after reboot (as they require access to an encrypted partition)

    - by Binary255
    Hi, I use Ubuntu Server 10.04. I more or less only want the server to be accessible over SSH after a reboot. I will then log in and mount the encrypted partition myself, after which I start the services which use it. How would I go about setting something like that up? (My first idea was to have everything except /boot in an encrypted LVM, but I never got logging in through SSH and mounting the LVM to work. Initramfs was a bit too complicated for me. Otherwise I think this would have been the best solution.)
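
    A minimal sketch of the idea with the stock SysV tooling on 10.04 (mysql is just a stand-in for whichever service needs the partition, and the device/mount point are hypothetical):

        # stop the service from starting at boot
        sudo update-rc.d mysql disable
        # (if your update-rc.d lacks "disable", removing the rc symlinks with
        #  "sudo update-rc.d -f mysql remove" achieves the same)

        # after logging in over SSH and unlocking/mounting the encrypted partition:
        sudo mount /dev/mapper/secure /srv/data
        sudo service mysql start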

    Read the article

  • Why do I need to add my application pool identity to the IIS_IUSRS group?

    - by smcolligan
    I'm setting up a .NET v4.0 web application on a Windows 2008 R2/IIS 7.5 server that uses a domain account for the application pool identity. When I access the site, I get the following error: The current identity () does not have write access to 'C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files' According to this: http://learn.iis.net/page.aspx/140/understanding-built-in-user-and-group-accounts-in-iis/ the identity of the worker process is added to the IIS_IUSRS group when the process starts. This works fine for the existing .NET v2.0 applications I have running on the same server (I have not had to add their domain-account application pool identities to the IIS_IUSRS group), but it does not seem to be the case for the first .NET v4.0 web application I'm setting up. Once I add the identity to the group, everything works fine. I suspect something is misconfigured, and that is what forces me to do this. I would like to understand this before rolling out more sites/servers. Thanks in advance for your help...
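
    For what it's worth, the documented way to grant a custom account the ASP.NET temp-directory permissions is aspnet_regiis with the -ga switch; a hedged sketch (the account name is a placeholder):

        cd %windir%\Microsoft.NET\Framework\v4.0.30319
        aspnet_regiis -ga MYDOMAIN\MyAppPoolAccount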

    Read the article

  • For a particular domain, how can I cache its JSON responses locally?

    - by Chris
    I'm coding the frontend of a web app that uses XHR to grab JSON data from a 3rd party. The 3rd-party service is slow, and because of its API design we need to make a LOT of API requests every time I refresh the page to test some new code. It's making the development loop painful. The requests are GETs, POSTs and PUTs, even though I'm pretty sure none of them changes state. I want to go to localhost for the JSON rather than to this 3rd-party API, simply to make my development process faster.
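
    One way to get this is a small local caching reverse proxy that the frontend points at during development. A hedged nginx sketch (api.example.com stands in for the real 3rd-party host; caching POST responses only makes sense here because these particular calls don't change state):

        # nginx.conf excerpt - cache 3rd-party JSON locally for development
        proxy_cache_path /tmp/api_cache keys_zone=api:10m;

        server {
            listen 8080;

            location / {
                proxy_pass          https://api.example.com;
                proxy_cache         api;
                proxy_cache_methods GET HEAD POST;
                # include the method and body so different POSTs don't collide
                proxy_cache_key     "$request_method$request_uri$request_body";
                proxy_cache_valid   200 10m;
            }
        }

    The frontend then requests http://localhost:8080/... instead of the 3rd-party URL.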

    Read the article

  • Is it possible to group chrome extension processes?

    - by Shajirr
    I have a problem with Chrome: most extensions, even those which consume merely 5-10 MB of memory, each get their own process, and because of that Chrome uses a single process for all the tabs, which consume a lot more memory than the extensions, even with the --process-per-tab switch. This behavior seems illogical: why do you need extensions in separate processes if you can't use your browser properly when it takes 5-10 seconds just to load a tab and it freezes constantly? Is it possible somehow to limit the number of processes that can be used for extensions, maybe by grouping them, say 10 extensions per process?

    Read the article

  • How to keep programs from source up to date?

    - by wizard
    I'm designing a new server setup for hosting multiple websites. (Shared hosting for my clients over at SliceHost.) I've recently moved away from the traditional LAMP setup and chosen Ubuntu, Nginx, php-fpm and MySQL. I like it a lot better than my old Apache, suPHP, MySQL setup. It works great, provides encapsulation between sites, and uses substantially less memory. However, I have one major maintenance problem: in order to have a recent version of Nginx, and in order to use php-fpm at all, I've had to compile these programs from source. The reason I see this as a problem is that keeping track of updates and build configurations will end up being a lot of work. For two programs (and a patch) I can handle it, but it seems like this setup would not scale to many packages and servers. Are there good ways to manage this situation? I'm sure people do this all the time.
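
    One common middle ground is to keep building from source but wrap the result in a package the system can track. A hedged sketch with checkinstall (the version string and package name are illustrative):

        # inside the unpacked nginx source tree, after ./configure
        make
        sudo checkinstall --pkgname=nginx-custom --pkgversion=1.0.11 make install

        # the build is now a .deb: it can be listed, removed, and copied to other servers
        dpkg -l nginx-custom
        sudo dpkg -i nginx-custom_1.0.11-1_amd64.deb

    Building once and distributing the .deb to the other servers avoids compiling on every machine.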

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know the common architectural patterns for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information is computationally intensive and/or has complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of three ways of doing this:

    1. A scheduled task in app B that connects to an API of app A that provides the aggregated data; app B then stores it in Redis and uses that to render pages. Cons: it performs complex calculations within a web request and requires lots of work (API server and client, storage, etc.). Pros: the business logic still lives in app A.
    2. A scheduled task in app A that aggregates the data in a non-web process and stores it directly in Redis, to be read by app B.
    3. A scheduled task in app A that aggregates the data in a non-web process and uses an API in app B to save it.

    I'd like to know if there is a well-known architectural solution to this type of problem and, if not, what other pros/cons there are for the solutions I've suggested.
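
    Whichever option wins, the Redis side can stay very simple: one hash per month holding the aggregates. A hedged redis-cli sketch (key and field names are made up):

        # written by the scheduled task
        HMSET stats:2013-06 sales 152300 new_users 431 responsiveness 0.92

        # read by app B when rendering the dashboard
        HGETALL stats:2013-06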

    Read the article

  • Routing DHCP traffic over the internet

    - by rmanna
    I'd like to know if it's possible for the internet to sit between a DHCP server and the network it's "assigned" to. So, basically, something like this:

        ---------------                 ---------------       ---------------
        | DHCP Server |                 | DHCP        |       | Clients     |
        |             |----Internet----| Relay Agent |-------| 192.168.0.* |
        |             |                 | 192.168.0.1 |       |             |
        ---------------                 ---------------       ---------------

    The behavior I'm seeing is that the DHCP server is offering 192.168.0.* IPs and sending them back to 192.168.0.1, which it can't reach. I tried masquerading the packets sent by the relay agent, but that doesn't seem to work. From what I've been reading, this is normal behavior, since the DHCP server uses the GIADDR as the destination address for its OFFERs, and not the actual source IP of the packets it receives from the relay agent. Sooo, given that my DHCP server needs to be "on the other side of the internet" as depicted above, how can I get this working? Are there settings for dhcpd to do this, or is creating a VPN containing the DHCP server and the relay agent the only way? Thanks!
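
    If the relay's GIADDR (192.168.0.1) is reachable from the server, for example over a VPN or tunnel, the dhcpd side is just an ordinary subnet declaration matching that GIADDR. A hedged ISC dhcpd.conf sketch:

        # dhcpd.conf excerpt - the subnet must match the relay's GIADDR
        subnet 192.168.0.0 netmask 255.255.255.0 {
            range 192.168.0.100 192.168.0.200;
            option routers 192.168.0.1;
            option domain-name-servers 8.8.8.8;
        }

    The unicast OFFER is then routed back to 192.168.0.1 through the tunnel, which is also why masquerading the relay's packets doesn't help: the server replies to GIADDR, not to the packet's source address.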

    Read the article

  • WGet a Page that Requires Logging in

    - by Synetech inc.
    I’m trying to figure out a way to use WGET or a similar tool so that I can schedule a web page to be downloaded regularly as a sort of updating log. The problem is that the page requires that I be logged in; otherwise I get a different, generic page. Further, the page does not take login information as GET parameters in the URL: it uses POST to log in on the login page, and cookies to save the login information that’s read by the regular page. I’m currently using GNU Wget 1.10.2 for Windows. I’ve tried using WGET’s cookie functionality but have had mixed results, usually skewing towards it not working. Can anyone please advise on a way to accomplish this? Thanks a lot.
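
    The usual wget recipe is a two-step: POST the login form once while saving the cookies, then load those cookies for each scheduled fetch. A hedged sketch (the form field names and URLs are assumptions about the particular site):

        wget --save-cookies cookies.txt --keep-session-cookies ^
             --post-data "username=ME&password=SECRET" ^
             http://example.com/login.php

        wget --load-cookies cookies.txt http://example.com/updating-log.html

    (The ^ is the cmd.exe line-continuation character, since this is GNU Wget for Windows.)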

    Read the article

  • Subdomain on a separate server (Windows/IIS) won't load default web site?

    - by DOTang
    I've got a domain hosted with godaddy.com. I set it up so that subdomain.mysite.com points to a different server, which uses Windows/IIS (say, for example, IP 1.1.1.1). When I go to the subdomain I get no errors, but no webpage loads. In IIS, under the default website, I added a binding for subdomain.mysite.com with the IP 1.1.1.1, but still nothing loads, just a totally blank page. I know for sure the subdomain DNS is working correctly, because when I ping my subdomain the correct IP shows. What am I missing to get this working?
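
    For reference, the equivalent binding from the command line would look roughly like this (a hedged appcmd sketch; the site name is the default one):

        %windir%\system32\inetsrv\appcmd set site "Default Web Site" ^
            /+bindings.[protocol='http',bindingInformation='1.1.1.1:80:subdomain.mysite.com']

    One thing worth checking is whether another binding (for instance a catch-all *:80 binding on a different site) matches the request first, and whether the host header in the binding exactly matches what the browser sends.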

    Read the article

  • In-House DropBox

    - by beardedlinuxgeek
    Dropbox is perfect, but as a company we can't host anything worthwhile on servers that we don't control. So I've been tasked with coming up with a Dropbox alternative, something in-house. GlusterFS is nice, but has no offline access. SparkleShare uses Git, which isn't great for large files, and it also doesn't have a Windows port. Any other options? If I were to roll out my own from scratch, what do you think the best way to go about doing this would be?

    Read the article

  • htaccess rewrite condition old site to new site with querystring

    - by Brandon Braner
    I am not even going to pretend to fully understand how htaccess rewrite conditions work. I've been working on this for a while, searching and searching. I have an old WordPress site, www.old-site.com, and a new site, www.site.com. WordPress uses query strings (page_id=#) to point to pages. On the old site, page_id=2 went to a specific page, but on the new site it goes to the home page. I need old-site.com/?page_id=2 to go to site.com/our-company Here is what I am trying:

        RewriteCond %{HTTP_HOST} ^(www\.)?old-site.com$ [NC]
        RewriteCond %{QUERY_STRING} ^page_id=2$
        RewriteRule ^(.*)$ http://www.site.com/our-company/ [R=301,L]

    If I take out the rewrite condition for the query string, it redirects all traffic from old-site.com to the company page on the new site. Where am I going wrong? I have about 15 redirects I need to do this way.
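
    Two hedged observations about the rules as written. First, mod_rewrite passes the original query string through to the substitution by default, so even when the rule fires, the browser lands on /our-company/?page_id=2; ending the substitution with a bare ? discards it. Second, if WordPress's own rewrite block sits above these lines, its catch-all rule may consume the request first, so custom redirects generally need to come before it. A sketch of the adjusted rule:

        RewriteCond %{HTTP_HOST} ^(www\.)?old-site\.com$ [NC]
        RewriteCond %{QUERY_STRING} ^page_id=2$
        RewriteRule ^$ http://www.site.com/our-company/? [R=301,L]

    (The pattern ^$ matches the empty path of old-site.com/?page_id=2; the trailing ? strips the query string from the redirect target.)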

    Read the article

  • Internal SATA hard disk shows up as removable device on Windows 7

    - by neutron80k
    I am running Windows 7 on an Acer Aspire M7720 with three internal SATA drives; drives #2 and #3 are in removable HDD racks. Drive #3 (in the second removable rack) shows up as a removable device in the system tray. If I take drive #3 and put it in the first rack, it shows up as an internal drive. If I put any other drive in the second rack, that drive is also shown as a removable device. I would like to fix this so that the drive in the second rack is also listed as an internal drive. Since this seems to be independent of the actual drive in the rack, I checked the BIOS, but that third SATA port uses the same configuration as the second rack's port. So far I could not find a solution for this problem (it's really more an annoyance than a problem); any ideas are welcome.
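
    One widely circulated workaround for Windows 7's AHCI driver is to mark the port as internal in the registry. The exact path is an assumption here, since it varies with the controller driver and channel number, so treat this as a sketch:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\msahci\Controller0\Channel2" ^
            /v TreatAsInternalPort /t REG_DWORD /d 1

    A reboot is needed afterwards, and Channel2 stands for whichever channel the third SATA port actually maps to.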

    Read the article

  • Asus ux31e zenbook SSD non-standard / cant upgrade hdd?

    - by FstaRocka
    I bought a UX31E Asus Zenbook, and the drive crashed last week. I just cancelled an mSATA SSD order because I fear Asus has used a non-standard SSD! What a bunch of crooks; I've lost all my respect for this company! Can anyone confirm what it is, and why? My SSD model is the XM11 128GB; it looks like a RAM chip and only has two tongues, with 12 and 6 pins respectively. The mSATA drive I was about to order had 8 pins where mine had 6 (I never bothered counting the rest). This article seems to confirm it: UX31 UX21 Zenbook Article "It looks like SSD uses a non-standard format"

    Read the article

  • One subdomain is not working

    - by BFTrick
    Hello there, My main domain works just fine (www.example.com), and a subdomain set up by another developer works as well (sub1.example.com). But when I try to set up another subdomain, I go through the process and everything seems to work: the software creates the default files where the subdomain files should go. But when I try to browse there, it doesn't work. My host uses Plesk for all of the hosting administration. What do you think the problem is? I doubt it is some sort of cache issue, because I had the same problem on my phone, which I tried after seeing the problem on the PC. Maybe for some reason Plesk needs time to set this up? I have used cPanel before, and that works instantly.
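
    A quick hedged check is whether the new subdomain actually resolves yet; if Plesk created the files but the DNS record hasn't been added or propagated, the browser never reaches the server at all (sub2.example.com is a placeholder):

        dig sub2.example.com +short
        # compare with the working one
        dig sub1.example.com +short

    If the new name returns nothing while the old one returns an IP, it's a DNS record/propagation issue rather than a Plesk file issue.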

    Read the article

  • Debian Squeeze - Monitor outgoing traffic

    - by Sam W.
    I have a small webserver running Lighttpd 1.4 which has steadily used 250GB or less bandwidth per month for the past couple of months. But since May the traffic has spiked to more than triple what it was. Nothing special on my site would make it spike like that. When I checked with vnstat I found that 70% of the bandwidth is tx. I suspect I've been hacked and my webserver is becoming some sort of bot. ClamAV comes up with nothing, and I already replaced the Joomla installation with a fresh one early in June. But right now the traffic stays the same. My question: how can I monitor my server and see what is transmitting all that data out? This needs to be done to pinpoint the culprit. Can someone please point me the right way to solve this? Thank you.
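
    A hedged starting point on Debian is to watch per-process and per-connection traffic live, then capture a sample for offline inspection (eth0 is an assumption about the interface name):

        sudo apt-get install nethogs iftop
        sudo nethogs eth0            # bandwidth broken down by process
        sudo iftop -i eth0           # bandwidth broken down by remote host

        # capture outbound traffic for later analysis with tcpdump/wireshark
        sudo tcpdump -i eth0 -w outbound.pcap src host YOUR.SERVER.IP.HERE

    If nethogs points at the Lighttpd or PHP process, the next step would be inspecting the access logs and the Joomla extensions rather than the base install.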

    Read the article

  • openVAS - Microsoft RDP Server Private Key Information Disclosure Vulnerability - false Alarm?

    - by huebkov
    I performed an OpenVAS scan on a Windows Server 2008 R2 box and got a report of a high-threat-level vulnerability called "Microsoft RDP Server Private Key Information Disclosure Vulnerability": a remote attacker could perform a man-in-the-middle attack to gain access to an RDP session. The affected software is Microsoft RDP 5.2 and below. My server uses RDP 7.1; is this alarm a false alarm? The security advisory pages say: Solution Status: Unpatched, No remedy... References: http://secunia.com/advisories/15605/ http://xforce.iss.net/xforce/xfdb/21954/ http://www.oxid.it/downloads/rdp-gbu.pdf CVE: CVE-2005-1794 BID: 13818
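
    The underlying issue (the hard-coded key from the oxid.it paper) only matters when RDP runs with the legacy "RDP Security" layer; with TLS/SSL or NLA negotiated, that MITM vector is closed. A hedged way to check which layer the server is configured for (WMI class and property per the Terminal Services provider):

        wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting get SecurityLayer
        :: 0 = RDP Security Layer, 1 = Negotiate, 2 = TLS/SSL

    If it reports 2 (or 1 with TLS-capable clients), the finding is most likely a false positive triggered by a version banner check.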

    Read the article

  • Is it possible to use two different ErrorDocuments for different paths of a website?

    - by tapwater
    This is my first question on Stack Exchange and it might be a bit confusing. I currently run PmWiki (sorry, you'll have to google it; new users can only have 2 hyperlinks) at mydomain.com/pmwiki. I have a 404 page and .htaccess set up in my site's document root for 404s regarding anything that doesn't have to do with the wiki. By default, PmWiki handles URLs a little confusingly, so I had to use this in order to get them to look like mydomain.com/pmwiki/Namespace/Page: I had to create a .htaccess in /pmwiki to remove parts of the URL. PmWiki also has a custom 404 page (Site/PageNotFound), which has stopped working; now my site uses the /404.htm page. I noticed this when trying to install this "recipe" to enable case-insensitive URLs. Currently the only way to access Site/PageNotFound is by actually linking to it, and, if you read how that recipe functions, this is an issue. Currently mydomain.com/pmwiki/blahblah and mydomain.com/pmwiki/legitimate_namespace_but_lowercase/legitimate_lowercase_page_name both direct to mydomain.com/404.htm. I have to admit I'm very confused, and I apologize if I was unclear in any of this, but I could definitely use some help. Thanks!
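
    ErrorDocument is a per-directory directive, so the two handlers can coexist; a hedged sketch (the PmWiki target follows PmWiki's usual ?n=Group.Page URL scheme, assumed here):

        # .htaccess in the document root - everything outside the wiki
        ErrorDocument 404 /404.htm

        # .htaccess in /pmwiki - overrides the root handler for wiki URLs
        ErrorDocument 404 /pmwiki/pmwiki.php?n=Site.PageNotFound

    If the wiki's rewrite rules answer the request before Apache ever generates a 404, the second directive never fires, which may be why Site/PageNotFound stopped working once the rewrites were added.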

    Read the article

  • Solaris 10: Identify a PID and the CPU it's running on

    - by Marcus
    I have multiple instances of a database running on a Solaris system. I'd like to prove that each database process is being handled by a different CPU. Essentially, I want to be able to do something like ps -ef | grep <process_name> to get the PIDs and then run another command (if required) to identify the CPU... Is prstat able to do this? I'm making the assumption that as each database instance is started, each one uses a different CPU. I'm not sure if I'm understanding this correctly... The reason I want to do this is that Sun hardware has slow CPUs, but lots of them. Therefore, to get the best performance out of it, I need to try to spread the load among the CPUs... Thanks
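
    A hedged sketch of the Solaris-side commands (the process-name pattern is a placeholder; note the scheduler normally migrates processes freely unless they are explicitly bound):

        # list the CPUs
        psrinfo

        # show the processor each matching process is bound to, via ps(1)'s psr field
        ps -eo pid,psr,args | grep ora_

        # explicitly bind a PID to processor 2, then query and remove the binding
        pbind -b 2 12345
        pbind -q 12345
        pbind -u 12345

    Without an explicit pbind (or processor sets via psrset), the dispatcher spreads runnable processes across CPUs on its own, so each instance landing on its own CPU is not guaranteed.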

    Read the article
