Search Results

Search found 5261 results on 211 pages for 'includes'.

  • Proper High-End Video Editing Software for Windows?

    - by Michael Stum
    I'm wondering if anyone here has experience with prosumer/high-end video editing? So far I use Adobe Premiere CS4, but sadly it is an unstable and buggy mess. This could be partially caused by the fact that my input material isn't always pure HDV/AVCHD but sometimes DivX or already pre-processed video. One thing I did like about Adobe is that, with After Effects and Encore, they have a good overall toolset, but if it's sub-par then it's no good. Luckily it is already paid for and was worth its money overall, so it's not a complete waste. But are there alternatives? After Effects in particular is quite unique, and the closest thing I found is Apple's Final Cut Studio, which includes Motion and DVD Studio. The two downsides are that it requires a Mac and that it doesn't support Blu-ray. Any hints for Windows? Sony Vegas might be something I'll look at, but I'm guessing I'd have to keep using After Effects for any serious compositing/VFX?

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyLoad through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:

      wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt \
           --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    But the content of my_cookies.txt is only:

      # HTTP cookie file.
      # Generated by Wget on 2012-06-23 22:31:33.
      # Edit at your own risk.

    When I run the same command in debug mode I get the following output, which includes the Set-Cookie header in the response:

      DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
      --22:31:11-- http://localhost:8000/api/login
      Resolving localhost... 127.0.0.1
      Caching localhost => 127.0.0.1
      Connecting to localhost|127.0.0.1|:8000... connected.
      Created socket 3.
      Releasing 0x000504d0 (new refcount 1).
      ---request begin---
      POST /api/login HTTP/1.0
      User-Agent: Wget/1.10.2 (Red Hat modified)
      Accept: */*
      Host: localhost:8000
      Connection: Keep-Alive
      Content-Type: application/x-www-form-urlencoded
      Content-Length: 32
      ---request end---
      [POST data: username=USERNAME&password=PASSWORD]
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 200 OK
      Content-Length: 34
      Content-Type: application/json
      Cache-Control: no-cache, must-revalidate
      Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
      Connection: Keep-Alive
      Date: Sat, 23 Jun 2012 21:31:11 GMT
      Server: CherryPy/3.1.2 WSGI Server
      ---response end---
      200 OK
      hs->local_file is: login (not existing)
      Registered socket 3 for persistent reuse.
      TEXTHTML is on.
      Length: 34 [application/json]
      Saving to: `login'
      100%[=======================================>] 34 --.-K/s in 0s
      22:31:11 (1.28 MB/s) - `login' saved [34/34]
      Removing file due to --delete-after in main(): Removing login.
      Saving cookies to my_cookies.txt.
      Done saving cookies.

    Can anyone tell me what I am doing wrong? Thanks in advance!
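
    For illustration, a minimal cross-check of the same login endpoint using only Python's standard library, to confirm whether the server really hands back a cookie that can be persisted; the endpoint and credentials are just the placeholders from the wget command above:

      # Log in the same way wget does and dump whatever cookies come back,
      # writing them in the same Netscape cookie-file format wget uses.
      import http.cookiejar
      import urllib.parse
      import urllib.request

      jar = http.cookiejar.MozillaCookieJar("my_cookies.txt")
      opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

      data = urllib.parse.urlencode({"username": "USERNAME", "password": "PASSWORD"}).encode()
      with opener.open("http://localhost:8000/api/login", data=data) as resp:
          print(resp.status, resp.read())

      for cookie in jar:
          print(cookie.name, cookie.value, cookie.expires)

      # Save even session/discardable cookies to disk, unlike the defaults.
      jar.save(ignore_discard=True, ignore_expires=True)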

  • MOSS2007 tries to use ActiveDirectory when I have configured an alternative membership provider

    - by glenatron
    I've got a MOSS site that I am trying to configure using Forms authentication and absolutely any kind of membership provider whatsoever. Thus far ActiveDirectory has proved obstructively difficult, so I've just whipped up a simple stub membership provider and put it in the GAC. It's a very basic and simple provider but it works fine with an ASP.Net site; I just can't make it work with Sharepoint. On Sharepoint I get the following error when I look for StubProvider:Bob (or anything else for that matter) from the "Policy For Web Application" people picker:

      Error in searching user 'StubProvider:bob' : System.ComponentModel.Win32Exception: Unable to contact the global catalog server
        at Microsoft.SharePoint.Utilities.SPActiveDirectoryDomain.GetDirectorySearcher()
        at Microsoft.SharePoint.WebControls.PeopleEditor.SearchFromGC(SPActiveDirectoryDomain domain, String strFilter, String[] rgstrProp, Int32 nTimeout, Int32 nSizeLimit, SPUserCollection spUsers, ArrayList& rgResults)
        at Microsoft.SharePoint.Utilities.SPUserUtility.SearchAgainstAD(String input, SPActiveDirectoryDomain domainController, SPPrincipalType scopes, SPUserCollection usersContainer, Int32 maxCount, String customQuery, TimeSpan searchTimeout, Boolean& reachMaxCount)
        at Microsoft.SharePoint.Utilities.SPActiveDirectoryPrincipalResolver.SearchPrincipals(String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount)
        at Microsoft.SharePoint.Utilities.SPUtility.SearchPrincipalFromResolvers(List`1 resolvers, String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount, Dictionary`2 usersDict).

    The Provider is named as Authentication Provider for the Site Collection in question. As far as I can tell this is because Sharepoint is still trying to access ActiveDirectory rather than talking to the provider I'm asking it to use. My Sharepoint Central Administration configuration includes this:

      <membership>
        <providers>
          <add name="StubProvider" type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
        </providers>
      </membership>

    And also:

      <PeoplePickerWildcards>
        <clear />
        <add key="StubProvider" value="%" />
      </PeoplePickerWildcards>

    Is there a clear reason why this would not be accessible from the PeoplePicker, or why it is still trying to use ActiveDirectory? I've made sure I reset IIS and even restarted the server to see if either of those helped, but they made no difference.

  • nginx: SSI working on Apache backend, but not on gunicorn backend

    - by j0nes
    I have nginx in front of an Apache server and a gunicorn server for different parts of my website. I am using the SSI module in nginx to display a snippet on every page; each page pulls the snippet in with an SSI include directive. For static pages served by nginx everything is working fine, and the same goes for the Apache-generated pages - the SSI include is evaluated and the snippet is filled. However, for requests to my gunicorn backend running a Python app in Django, the SSI include does not get evaluated. Here is the relevant part of the nginx config:

      location /cgi-bin/script.pl {
          ssi on;
          proxy_pass http://default_backend/cgi-bin/script.pl;
          include sites-available/aspects/proxy-default.conf;
      }

      location /directory/ {
          ssi on;
          limit_req zone=directory nodelay burst=3;
          proxy_pass http://django_backend/directory/;
          include sites-available/aspects/proxy-default.conf;
      }

    Backends:

      upstream django_backend {
          server dynamic.mydomain.com:8000 max_fails=5 fail_timeout=10s;
      }

      upstream default_backend {
          server dynamic.mydomain.com:80;
          server dynamic2.mydomain.com:80;
      }

    proxy-default.conf:

      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    What is the cause of this behaviour? How can I get SSI includes working for my pages generated on gunicorn? How can I debug this further?
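
    As a debugging aid, a small sketch (in Python, since the backend is Django anyway) that fetches the same path through nginx and straight from the gunicorn backend, then reports the response headers that usually matter for the SSI filter (content type and encoding) and whether the raw backend body still contains an unexpanded include directive. The hostnames and ports are taken from the config above and may need adjusting:

      import urllib.request

      def fetch(url):
          # Ask for an uncompressed body so the SSI directive is visible as text.
          req = urllib.request.Request(url, headers={"Accept-Encoding": "identity"})
          with urllib.request.urlopen(req) as resp:
              return resp.headers, resp.read().decode("utf-8", errors="replace")

      for url in ("http://localhost/directory/",                    # through nginx
                  "http://dynamic.mydomain.com:8000/directory/"):   # straight to gunicorn
          headers, body = fetch(url)
          print(url)
          print("  Content-Type:", headers.get("Content-Type"))
          print("  Content-Encoding:", headers.get("Content-Encoding"))
          print("  contains unexpanded '<!--#include':", "<!--#include" in body)

    Comparing the two responses shows whether the backend is emitting the directive at all, and whether it comes back with a content type or encoding the SSI filter would skip.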

  • Excessive PHP errors in Joomla

    - by Rodnower
    I have Joomla 2.5 installed on Windows 7 with Apache 2 and PHP 5. I have countless PHP errors in the log like the following:

      [01-Sep-2012 19:33:55 UTC] PHP Strict standards: Only variables should be assigned by reference in C:\ammon_dev\ammon\plugins\system\jquery\jquery.php on line 24
      [01-Sep-2012 19:33:55 UTC] PHP Stack trace:
      [01-Sep-2012 19:33:55 UTC] PHP   1. {main}() C:\ammon_dev\ammon\administrator\index.php:0
      [01-Sep-2012 19:33:55 UTC] PHP   2. JAdministrator->route() C:\ammon_dev\ammon\administrator\index.php:40
      [01-Sep-2012 19:33:55 UTC] PHP   3. JApplication->triggerEvent() C:\ammon_dev\ammon\administrator\includes\application.php:106
      [01-Sep-2012 19:33:55 UTC] PHP   4. JDispatcher->trigger() C:\ammon_dev\ammon\libraries\joomla\application\application.php:670
      [01-Sep-2012 19:33:55 UTC] PHP   5. JEvent->update() C:\ammon_dev\ammon\libraries\joomla\event\dispatcher.php:161
      [01-Sep-2012 19:33:55 UTC] PHP   6. call_user_func_array() C:\ammon_dev\ammon\libraries\joomla\event\event.php:71
      [01-Sep-2012 19:33:55 UTC] PHP   7. plgSystemJquery->onAfterRoute() C:\ammon_dev\ammon\libraries\joomla\event\event.php:71

    I tried disabling error logging in php.ini:

      error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT

    Unfortunately that does not make a difference. Joomla isn't in debug mode, and I am sure that I'm editing the correct copy of php.ini because other changes I make to it take effect. Any ideas why I am getting so many errors or how to stop it from exploding the log?

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics, such as:

      - number of databases backed up
      - size of db backup file
      - size of db backup file compressed
      - time to make backup
      - time to zip file

    I want to be able to:

      A) have notifications if the jobs are not run according to schedule
      B) set thresholds on the statistics which would trigger notifications
      C) trend and graph the statistics

    I am planning on sending this information to the monitoring application through an HTTP POST (see the sketch below). Or, the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.
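
    A hypothetical sketch of the kind of push described above: the backup wrapper posts its statistics as JSON and applies a simple local threshold check first. The URL, field names, and threshold values are placeholders, not part of any particular monitoring tool:

      import json
      import urllib.request

      stats = {
          "job": "mysql-nightly",
          "databases_backed_up": 12,
          "backup_bytes": 5_368_709_120,
          "backup_bytes_compressed": 1_073_741_824,
          "backup_seconds": 540,
          "zip_seconds": 120,
      }

      # Example threshold: warn if the backup took longer than 15 minutes.
      if stats["backup_seconds"] > 900:
          print("WARNING: backup exceeded the 15 minute threshold")

      req = urllib.request.Request(
          "http://monitor.example.com/api/backup-stats",   # placeholder endpoint
          data=json.dumps(stats).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print("monitoring server answered:", resp.status)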

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each.

    Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s etc, for example – and it's always at 500ms intervals, not anything else.) It's also repeatable cross-browser, and it's not repeatable with other forms of content.

    I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug. So is it:

      i) IIS on desktop OSs deliberately does this, to make you use server OSs in production?
      ii) There is some magical setting that has eluded me for as long as I've known IIS?
      iii) Something peculiar to MVC or SQL Server 2008?

    Or something else?
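
    For illustration, a rough timing harness (a sketch, not the original app) that fires the nine image requests in parallel the way a browser would and records when each one starts and finishes, which makes any ~500ms clustering easy to see in the output. The URL pattern is a placeholder for the controller action that serves the images:

      import time
      from concurrent.futures import ThreadPoolExecutor
      from urllib.request import urlopen

      BASE = "http://localhost/images/render?id={}"   # placeholder route

      t0 = time.perf_counter()

      def fetch(i):
          started = time.perf_counter() - t0
          with urlopen(BASE.format(i)) as resp:
              resp.read()
          finished = time.perf_counter() - t0
          return i, started, finished

      with ThreadPoolExecutor(max_workers=9) as pool:
          for i, started, finished in pool.map(fetch, range(1, 10)):
              print(f"image {i}: started at {started:6.3f}s, finished at {finished:6.3f}s")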

  • Why Does DreamWeaver CS5 Discriminate between File Extensions, Even After Modding Mime Types!?

    - by Sam
    Hi folks, even after I forced DreamWeaver CS5 to allow opening of .ast extensions as a MIME type of php5, which DreamWeaver now opens and colors correctly as described here, I still have trouble figuring out why it discriminates between the two file extensions!

    Symptoms: external files & Design View. I have a file foo.php which PHP-includes other files (e.g. the PHP-combined css.php and js.php). When opening foo.php all functions work perfectly: the external (included) PHP files are all recognised correctly. However, when I change foo.php to foo.ast and open it again, it no longer recognises the file extensions in the top bar, and I also lose the Design / Live View functionality. When I change foo.ast back to foo.php, all works again! Anyone have any clues as to why there remains a difference between one extension and the other?

    Note 1: I have added the .ast extension, next to .php, in these four files:

      1. C:\Users\Sam\AppData\Local\VirtualStore\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
      2. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
      3. C:\Users\Sam\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration\Extensions.txt
      4. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\Extensions.txt

    Note 2: sometimes even .php files do not want to show in Design View or Live View. Could this be caused by a corrupted installation?

  • Is it possible to boot Windows 7 by default when your hard drive is partitioned with two OSes?

    - by Muhammad
    I have a PC with a hard drive that's partitioned into home directories for Windows 7 and Ubuntu. I primarily use Windows 7 and occasionally (once a week) use Ubuntu. When I boot up my computer, I usually get taken to a boot menu with about 5 different options: 3 are for Ubuntu's configurations, one's for swap, and the fourth is for Windows 7. Then after I select Windows 7 or Ubuntu from this menu, I get taken to another menu that again asks me for Windows 7 or Ubuntu; this time there are only 2 options, Windows 7 and Ubuntu. (Side note: from experience I realized most boot menus are timed, and so are these.) So if I ever turn my computer on without actually sitting in front of it for a few minutes, it boots into Ubuntu.

    I'm trying to figure out what I need to do so I can first get rid of the 2 boot menus. And if possible, I'm looking for help changing my boot options so that I can load up Windows 7 by default (even with the boot menu wait of about 30 seconds). My hard drive's partitions are laid out like this:

      - Windows 7 (C partition)
      - Multimedia (D partition, I just use this for backup/non-OS stuff)
      - Ubuntu (home directory)
      - Swap

    Is there any other information I need to provide?

  • 553-Message filtered - HELO Name issue?

    - by g18c
    I am having major issues sending from my SBS2011 machine to Message Labs (server-13.tower-134.messagelabs.com):

      553-Message filtered. Refer to the Troubleshooting page at
      553-http://www.symanteccloud.com/troubleshooting for more
      553 information. (#5.7.1) ##

    I have changed the IP and hostnames in the details below. I am not on any IP or domain blacklists. I have set up SPF (which includes the Mailchimp servers):

      v=spf1 mx a ip4:95.74.157.22/32 a:remote.mydomain.com include:servers.mcsv.net ~all

    I am sure I have set up my HELO names correctly under the Exchange Management Console. Sending a test email from the SBS server and looking at the header shows the following:

      X-Orig-To: [email protected]
      X-Originating-Ip: [95.74.157.22]
      Received: from [95.74.157.22] ([95.74.157.22:52194] helo=remote.mydomain.com)
        by smtp50.gate.ord1a.rsapps.net (envelope-from <[email protected]>)
        (ecelerity 2.2.3.49 r(42060/42061)) with ESMTP
        id 11/90-10010-E529C835; Mon, 02 Jun 2014 11:04:09 -0400
      Received: from MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef]) by
        MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef%10]) with mapi id
        14.01.0438.000; Mon, 2 Jun 2014 19:03:56 +0400

    Is the main HELO name there OK, and do I need to worry about the second Received block where MYSBSSVR.mydomain.local is mentioned? I have asked the ISP to set the reverse DNS for my IP to remote.mydomain.com, but they have instead put remote.MYDOMAIN.com - would this cause HELO lookups to classify this as not matching? Anything else I can do to find out why I am being filtered?
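
    One quick check that only needs the standard library: look up what the PTR record for the sending IP actually returns and compare it with the HELO name, since a reverse-DNS/HELO mismatch is one of the usual reasons reputation filters reject mail. The IP below is the (obfuscated) one from the headers above:

      import socket

      ip = "95.74.157.22"
      helo_name = "remote.mydomain.com"

      try:
          ptr_name, _aliases, _addrs = socket.gethostbyaddr(ip)
      except socket.herror as exc:
          print("no PTR record found:", exc)
      else:
          print("PTR for", ip, "is", ptr_name)
          # DNS names are case-insensitive, so remote.MYDOMAIN.com versus
          # remote.mydomain.com should not itself cause a mismatch.
          print("matches HELO (case-insensitive):", ptr_name.lower() == helo_name.lower())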

  • How do I register a service with Bonjour?

    - by Roman
    I am trying to start using Bonjour. Here I found a manual describing how to register a service with Bonjour, and the following is written there: "The network services architecture in Bonjour includes an easy-to-use mechanism for publishing, discovering, and using IP-based services." Well, let's see how to register a service. At the very beginning it is written: "To publish a service, an application or device must register the service with a Multicast DNS responder." But how?! First of all, I do not know what a Multicast DNS responder is. Second, it is not written how I do it. Where and what should I type? Should I use the command line? Should I use some programming language? What exactly should I type? Is there an easy way to start using Bonjour? It was emphasized several times how easy it should be to use, but I have not been able to get started for several days. So, can anybody please help me with that?
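
    For illustration, one concrete way to do the registration from code: a sketch using the third-party python-zeroconf package (pip install zeroconf), which implements its own multicast DNS responder rather than talking to Apple's Bonjour daemon. The service name, type, address and port below are made up for the example; on a Mac, the bundled dns-sd command-line tool can perform a similar registration.

      import socket
      from zeroconf import Zeroconf, ServiceInfo

      info = ServiceInfo(
          type_="_http._tcp.local.",
          name="My Test Service._http._tcp.local.",
          addresses=[socket.inet_aton("192.168.1.10")],   # placeholder LAN address
          port=8080,
          properties={"path": "/"},
      )

      zc = Zeroconf()
      zc.register_service(info)          # announce the service on the local network
      try:
          input("Service registered; press Enter to unregister...")
      finally:
          zc.unregister_service(info)
          zc.close()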

  • Why can't I renew my dynamic IP address?

    - by qwerty
    So, I'm going to explain this from the start. I've started a project with a friend of mine which includes a web spider that crawls through all the pages on a site and stores them in a DB. Since I've never done this before, I didn't think about the number of requests I was actually sending to the site, and after a day or two I finally got my IP blocked. I need to be able to visit that site, as it's very important to me - not only for the project, but for other reasons too. (And if I'm able to renew my IP, I'm going to set a delay on the crawler so I don't get blocked and DDoS the site - see the sketch below.) I have a dynamic IP address, at least that's what my router settings say. I've tried ipconfig /flushdns, ipconfig /release, and restarting the computer - no result, I end up with the same IP address. I've also tried renewing it from the router; however, I think that uses the same method, which isn't working. Is it possible that the site has blocked my MAC address? Can a site even access my MAC address?
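
    A hedged sketch of the kind of throttling mentioned above: a fixed delay between requests plus a simple backoff when the site starts refusing connections. The URLs and timings are illustrative only:

      import time
      import urllib.error
      import urllib.request

      DELAY_SECONDS = 5          # pause between requests
      BACKOFF_SECONDS = 300      # back off for 5 minutes on a failure

      def polite_fetch(urls):
          for url in urls:
              try:
                  with urllib.request.urlopen(url, timeout=30) as resp:
                      yield url, resp.read()
              except urllib.error.URLError as exc:
                  print(f"{url} failed ({exc}); backing off {BACKOFF_SECONDS}s")
                  time.sleep(BACKOFF_SECONDS)
              time.sleep(DELAY_SECONDS)   # be gentle even on success

      for url, body in polite_fetch(["http://example.com/page1", "http://example.com/page2"]):
          print(url, len(body), "bytes")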

  • Windows and file system abstraction - how much does it matter where something comes from?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is, or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers (right term?) that enable Windows to access the Mac OS HFS+ partition. AFAIK it's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that leads to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine.

    My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well, since I was wondering: does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting errors due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research I found out that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial trust support to allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:

      ProviderIncompatibleException was unhandled by user code

      An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct.

      InnerException: The provider did not return a ProviderManifestToken string.

    Everything works fine when I have Connector/Net 6.4.4 installed: I can access the database and perform read/write/delete actions against it. I have a reference to the following in the project:

      - MySql.Data
      - MySql.Data.Entity
      - MySql.Web

    My connection string in Web.config:

      <connectionStrings>
        <add name="AESSmartEntities"
             connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;"
             providerName="MySql.Data.MySqlClient" />
      </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?

  • Possible to host CentOS netinstall files on a local HTTP/FTP?

    - by garlicman
    I'm running XenServer on a Dell R610 and am running into a catch-22. During install from DVD, CentOS can't find the DVD package catalogue. It's a reported error for some: XenServer + CentOS 6 + DVD install in some hardware configurations = failed install. Yes, I checked the MD5 and let the disc test pass. In every reported case, the netinstall was the solution. The issue is that my net access is required to go through a web proxy that prompts before you can download a file, which naturally breaks any download automation. I've been waiting on our IT to put in an exception rule to allow my lab to bypass the prompt, but it's been over 3 weeks now and they don't seem responsive. (I've been working on this a day or two a week.)

    I want to try to host the netinstall files locally in my Xen network. Right now I only have a bunch of Windows-based VMs; CentOS won't install, so I don't have any Linux tools. I had tried simply hosting all the DVD contents off one of the Windows servers using Mongoose (I didn't want to set up IIS). I copied them to a hosted sub-directory similar to all the mirrors out there (e.g. http:///centos/6.2/os/i386/) with no auth or anything, and then in the netinstall I pointed to it correctly. I now realize just copying the DVD files over won't work: the repodata will point to a local device, not the site I'm hosting (e.g. the DVD repodata includes XML that points to where the packages are), and clearly I'm hosting them over HTTP, not from a DVD. Is there an easy way to sort this out? I'm just trying to install CentOS 6 on Xen. If there's a turnkey downloadable Xen image with CentOS 6.2 on it, or a downloadable repo image, I'll take that too! Thank you in advance!
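
    For what it's worth, serving a copied install tree over plain HTTP needs nothing more than Python's built-in web server, which also runs on Windows. A minimal sketch with a placeholder path for wherever the DVD contents were copied (this only covers the hosting side, not the repodata question raised above):

      import functools
      from http.server import HTTPServer, SimpleHTTPRequestHandler

      TREE = r"C:\mirrors\centos\6.2\os\i386"   # placeholder path to the copied tree

      # The directory argument needs Python 3.7 or newer.
      handler = functools.partial(SimpleHTTPRequestHandler, directory=TREE)
      server = HTTPServer(("0.0.0.0", 8080), handler)
      print("Serving", TREE, "on port 8080 - point the netinstall URL at this host")
      server.serve_forever()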

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is being detected; however, iwlist scan returns

      wlan0     Failed to read scan data : Network is down

    and the network-manager log says

      <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
      <info> (wlan0): new 802.11 WiFi device (driver: 'ath5k')
      <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
      <info> (wlan0): now managed
      <info> (wlan0): device state change: 1 -> 2 (reason 2)
      <info> (wlan0): bringing up device.
      <info> (wlan0): preparing device.
      <info> (wlan0): deactivating device (reason: 2).
      supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed
      <info> modem-manager is now available
      <WARN> default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files
      <info> Trying to start the supplicant...
      <info> (wlan0): supplicant manager state: down -> idle
      <info> (wlan0): device state change: 2 -> 3 (reason 0)
      <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    The exact same configuration works with the generic kernel. Is anything except wifi and the crypto API needed for wi-fi to work?

  • WebHost Manager - Apache's VHost isn't matching the DNS entry

    - by Trans
    I've used cPanel's WebHost Manager to create a new host on my VPS. I then used my HOSTS file to point fake.com to the relevant IP address. The problem I'm having now is that Apache isn't recognizing the VHost, or something, as it's just loading the default entry and 404'ing every document I try to GET. Here's the VHost entry:

      NameVirtualHost 0.0.0.209:80
      NameVirtualHost 0.0.0.211:80

      <VirtualHost 0.0.0.209:80>
        ServerName fake.com
        ServerAlias www.fake.com
        DocumentRoot /home/fakecom/public_html
        ServerAdmin [email protected]
        ## User fakecom # Needed for Cpanel::ApacheConf
        <IfModule mod_suphp.c>
          suPHP_UserGroup fakecom fakecom
        </IfModule>
        <IfModule !mod_disable_suexec.c>
          SuexecUserGroup fakecom fakecom
        </IfModule>
        CustomLog /usr/local/apache/domlogs/fake.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
        CustomLog /usr/local/apache/domlogs/fake.com combined
        Options -ExecCGI -Includes
        RemoveHandler cgi-script .cgi .pl .plx .ppl .perl
        ScriptAlias /cgi-bin/ /home/fakecom/public_html/cgi-bin/
      </VirtualHost>

    I've Googled this profusely and all that's being returned is 'DNS errors. Wait for it to propagate'. That's obviously not my problem, since I'm using HOSTS. What else could be causing this?

    EDIT: Forgot to mention. http://fake.com/~fakecom/test.html loads just fine, so fake.com is pointing to the right IP.
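
    For illustration, a quick name-based-vhost probe that takes the HOSTS file out of the equation: connect straight to the server's IP and vary only the Host header, then compare the responses. A sketch using Python's standard library; the IP is the placeholder one from the vhost entry, and the path assumes a test.html exists in the vhost's DocumentRoot:

      import http.client

      SERVER_IP = "0.0.0.209"   # placeholder, as in the config above

      def probe(host_header):
          conn = http.client.HTTPConnection(SERVER_IP, 80, timeout=10)
          # http.client sends the Host header we supply instead of its own.
          conn.request("GET", "/test.html", headers={"Host": host_header})
          resp = conn.getresponse()
          body = resp.read()
          conn.close()
          return resp.status, len(body)

      print("Host: fake.com          ->", probe("fake.com"))
      print("Host: www.fake.com      ->", probe("www.fake.com"))
      print("Host: nonexistent.test  ->", probe("nonexistent.test"))

    If all three Host values come back with identical default-site responses, that would point at the Apache vhost matching; if fake.com behaves differently here, the HOSTS/DNS side is more suspect.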

  • Can I use a Windows Server 2003 Domain Controller but my home router for DNS?

    - by NetworkingWannabie
    Hi all. It's probably easiest to start with a description of my current setup, which works (oh, and this is a home setup, not an office or anything):

      - I have an ADSL modem with a static IP address (192.168.128.1), and its DHCP capability is disabled.
      - I have a permanently powered-up Windows Server 2003 machine with a fixed IP (192.168.128.2) which provides my domain controller, DHCP, and DNS.
      - The default gateway for everything is the ADSL modem.
      - Everything is set up to use the WS2003 machine as the primary DNS, with the ADSL modem as secondary DNS just in case the server goes down (everything includes the server itself).
      - Lastly, just in case it's relevant, I have my DHCP leases set to infinite (or whatever the right term is).

    Everything is pretty hunky dory, except for the fact that my server is ALWAYS on and it isn't always used, so I'm burning juice that I don't need to - my server burns around 120W, which isn't immense but isn't irrelevant either. I'd like to put it into a standby state when it isn't being used (the more standby the better) and then get the clients to wake it up. Am I correct in assuming that this won't work at the moment - a given client would need an IP address to wake the machine up, and it needs the machine to be awake to get an IP - catch 22? Assuming I'm correct, can I move to using my router (which is always on) for DHCP? What impact will this have on DC and DNS? Alternatively, does anyone have a better way for me to achieve this? Can I get the server to wake up when it sees clients look for a DHCP server, etc.? Wow, that came out longer than expected! Thanks for your help.
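
    On the "get the clients to wake it up" part: a Wake-on-LAN magic packet is addressed to the NIC's MAC address and broadcast on the local network, which is one common way to wake a sleeping server. A minimal sketch of sending one (the MAC is a placeholder, and WOL has to be enabled in the server's BIOS/NIC settings):

      import socket

      def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
          # Magic packet: 6 bytes of 0xFF followed by the target MAC sixteen times.
          mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
          packet = b"\xff" * 6 + mac_bytes * 16
          with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
              sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
              sock.sendto(packet, (broadcast, port))

      wake("00:11:22:33:44:55")   # placeholder MAC of the Windows Server 2003 box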

  • Can I run Excel 2010 on a server?

    - by Glen Little
    This question is not about a person using Excel on a computer that happens to have a Windows Server OS, and it is not about using any SharePoint Services features. The question is about automated processes that use code (Office Automation) to open Excel files, manipulate them, run calculations, read data, save copies of the file and close the files - all in code. In previous versions of Excel, the licensing agreement prevented use on a public server, notes from Microsoft warned about the problems of trying to use Office Automation in a server environment, and we were warned that Excel was single threaded and not designed for use on a server. Most of the articles about this were written before Office 2010.

    But now, Excel 2010 is designed to work on a High Performance Computing server using HPC Services for Excel. One HPC document mentions "Windows HPC Server 2008 R2 includes a comprehensive pop-up manager that can handle occasional dialog boxes and pop-up messages". So my question is: is it now "safe" to run code that automates Excel 2010 on a "normal" server without using the HPC services? If not, can HPC Services for Excel work on a single server? I don't need the high-performance, distributed-computing aspect of HPC Services for Excel - just the ability to run Excel on a server. Can that now be done? Thanks, Glen
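
    For context, a minimal sketch of the kind of unattended Office Automation being asked about, using the third-party pywin32 package (win32com); the file paths are placeholders, and this is the very pattern the older Microsoft guidance warned about running server-side:

      import win32com.client

      excel = win32com.client.Dispatch("Excel.Application")
      excel.Visible = False
      excel.DisplayAlerts = False          # suppress pop-ups a server cannot answer

      try:
          wb = excel.Workbooks.Open(r"C:\jobs\input.xlsx")     # placeholder path
          wb.Worksheets(1).Range("A1").Value = 42              # manipulate a cell
          excel.Calculate()                                    # force recalculation
          result = wb.Worksheets(1).Range("B1").Value          # read a computed value
          wb.SaveCopyAs(r"C:\jobs\output.xlsx")                # save a copy
          wb.Close(SaveChanges=False)
          print("computed value:", result)
      finally:
          excel.Quit()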

  • New build won't boot: some fans turning, no beep or video

    - by Dave
    When my new system is powered up, the case fan and power supply fans turn fine. The CPU fan twitches but never gets going, although I've heard that with AMDs and Gigabyte motherboards that is not necessarily a problem. The hard drive is spinning. However, there is absolutely no indication that anything else is happening. The motherboard, as far as I can tell, does not have an internal speaker, but I harvested one from another machine and plugged it in, and still no beeps at all. The monitor screen stays black, on both the integrated VGA and DVI. This is a brand new build and has never successfully booted. My parts are:

      - AMD Athlon II X2 245 Regor 2.9GHz Socket AM3 65W Dual-Core Processor, Model ADX245OCGQBOX (includes CPU cooler)
      - GIGABYTE GA-MA785GPMT-UD2H AM3 AMD 785G HDMI Micro ATX AMD Motherboard - Retail
      - G.SKILL Ripjaws Series 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory, Model F3-10666CL8D-4GBRM - Retail
      - CORSAIR CMPSU-400CX 400W ATX12V V2.2 80 PLUS Certified Power Supply, Compatible with Core i7 - Retail
      - SAMSUNG Spinpoint F3 HD502HJ 500GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive
      - COOLER MASTER Elite 341 RC-341C-KKN1-GP Black Steel MicroATX Mid Tower Computer Case - Retail

    I also have a DVD burner, but it acts the same whether that is plugged in or not. I'm using the onboard video. What I've tried so far: I've switched power supplies, with no difference. I've tried different monitors (all of which work on other machines), with no difference. I have tried putting in one memory module at a time, with no difference. I have tried the absolute minimum I can think of (power supply into motherboard, power button ONLY plugged into the front panel, CPU fan plugged in), with no difference. I appreciate any ideas anyone might have. Do I need to RMA the motherboard? This is my first build, so there might be something obvious. I was very careful with static during assembly; I'm confident nothing was zapped.

  • Can't ping guest OS from Windows XP SP3 host running VIC.

    - by Vittal
    Hi, I am running VMware ESX Server 3i Version 3.5.0 and accessing this server using VMware Infrastructure Client Version 2.5.0 on a Windows XP SP3 machine. I have enabled the Microsoft TCP/IP version 6 stack and assigned an IPv6 address (using the netsh command) to the network adapter. The guest OSes running on the ESX Server (including Win7, W2K8 and WinXP) also have IPv6 addresses enabled on their adapters. The adapters are configured to be on VM Network (bridged mode) and hence have connectivity to the Internet. The VMs are able to ping each other using IPv6 addresses and are also able to ping a physical Win7 machine using IPv6 addresses. However, the Windows XP SP3 machine on which the Client is running is not able to ping any hosts running on the ESX Server, while the VMs are able to ping this host. Whenever I try to ping from the WinXP box I get the "Invalid source route specified." error. The WinXP machine is not able to ping the Win7 physical machine either (the same error is thrown). Can someone help me understand why I am receiving this error and what I need to do to resolve it? Thanks, Vittal

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode > Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect, and I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes.

    This poses a problem. I only want to do the bitmap mode change on the background layer, and after the change I want to restore the layer structure as it was - with the foreground, lineart and over-lineart layers back above the now-halftoned background. My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode change is over. But for the "ink" and "over ink" layers, I have to manually cut these layers, paste them into a new document, and later re-cut and re-paste them after running my action. This is so clunky!

    What I would like to know is whether it's possible to set aside my layers in an automated way, and then bring them back in, also in an automated way. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop allows you to do things outside of your current document with an action. It seems to me that the only way to do what I want is the "hack" of incorporating the clipboard into the action, but that leaves me stuck, as I have two more layers that can't fit onto that same clipboard. Help or suggestions would be appreciated. I can keep on doing it manually, but having a comprehensive action would save me a ton of time.

  • Why does my DSL modem now need a reboot each time for my laptop to connect?

    - by msorens
    I have a rather peculiar home networking issue. For some time my home network was purring along fine: I could turn on either of my laptops and they would quickly find and connect to my DSL modem (and thence the internet). Several days ago I unplugged my DSL modem for the first time in months. Upon turning it back on and waiting for the boot to finish, the lights on the panel indicated the DSL modem was fully operational, just as before. But that's not what happened - not at all.

    Now when I turn on my Win7 laptop, the network icon in my system tray shows a small starburst; hovering over it, the tooltip states "Not connected; connections are available". Clicking it lists several nearby networks, including my own network showing a strong signal. If I click to connect, it attempts a connection but then I get a dialog stating "Windows was unable to connect to MyNet.". Turning wireless off on my laptop and back on makes no difference. Running the network troubleshooter (which includes doing a repair on the network connection) makes no difference. The only remedy is to reboot the DSL modem (i.e. unplug it, wait a few seconds, then plug it back in). As soon as it goes online, my laptop finds it and connects properly.

    To add one more twist to the story, this happened to me once before, several months ago. After a couple of weeks the situation resolved itself(!) - everything started working properly again, due to nothing I did. Final note: this problem only affects the wireless connection to the DSL modem. My desktop computer, connected to the DSL modem by a wired line, connects fine when I turn it on. Any thoughts on why this is happening or how to fix it?

  • Can't access folder on server - Permission denied

    - by Michal Korzeniowski
    I am running a VPS with Ubuntu 11.04. After a clean MODX install I tried to access http://www.encepence.pl/manager and got a permission denied error from my server. The thing is, I can easily access any other folder under that domain and modify the contents of this folder (manager) via FTP. I've tried modifying the virtual host with this:

      <Directory /var/www/blackflow/data/www/encepence.pl/manager/>
        Options Indexes FollowSymLinks ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
      </Directory>

    But it didn't work. The existing configuration looks like this:

      <Directory /var/www/blackflow/data/www/encepence.pl>
        Options -ExecCGI -Includes
        php_admin_value open_basedir "/var/www/blackflow/data:."
        php_admin_flag engine on
      </Directory>

      <VirtualHost 192.166.219.34:80 >
        ServerName encepence.pl
        CustomLog /var/www/httpd-logs/encepence.pl.access.log combined
        DocumentRoot /var/www/blackflow/data/www/encepence.pl
        ErrorLog /var/www/httpd-logs/encepence.pl.error.log
        ServerAdmin [email protected]
        ServerAlias www.encepence.pl
        SuexecUserGroup blackflow blackflow
        AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
        AddType application/x-httpd-php-source .phps
        php_admin_value open_basedir "/var/www/blackflow/data:."
        php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
        php_admin_value upload_tmp_dir "/var/www/blackflow/data/mod-tmp"
        php_admin_value session.save_path "/var/www/blackflow/data/mod-tmp"
        VirtualDocumentRoot /var/www/blackflow/data/www/%0
      </VirtualHost>

    Any ideas on what might have gone wrong?
