Search Results

Search found 18598 results on 744 pages for 'result'.


  • Open a custom PowerShell session remotely

    - by Yann
    I have 2 computers. On computer A, I have a custom module written in C# for PowerShell 3.0, installed via an MSI. I also have a shortcut that opens PowerShell with the module already loaded, so on that computer I can just double-click the shortcut and run my Do-Something command without any problem, much like the Exchange Server PowerShell. Now I would like to do the same from computer B, from a remote session created in C#. So my question is: how can I open a remote PowerShell session to computer A with my module already loaded and the shell configured, so that I can just run my command and get the same result as if I had run it on computer A?
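
    A minimal C# sketch of one way to approach this (assuming WinRM remoting is enabled on computer A; the machine name "ComputerA" and module name "MyCustomModule" are placeholders): connect to the default remoting endpoint and import the module explicitly before running the cmdlet. Registering a custom session configuration on computer A that pre-loads the module, and pointing the shell URI at it, would remove the Import-Module step.

    ```csharp
    using System;
    using System.Management.Automation;
    using System.Management.Automation.Runspaces;

    class RemoteModuleDemo
    {
        static void Main()
        {
            // "ComputerA" and "MyCustomModule" are placeholders for the real names.
            var connectionInfo = new WSManConnectionInfo(
                new Uri("http://ComputerA:5985/wsman"),
                "http://schemas.microsoft.com/powershell/Microsoft.PowerShell",
                PSCredential.Empty); // run as the calling Windows account

            using (Runspace runspace = RunspaceFactory.CreateRunspace(connectionInfo))
            {
                runspace.Open();
                using (PowerShell ps = PowerShell.Create())
                {
                    ps.Runspace = runspace;
                    // Load the module inside the remote session, then call the cmdlet.
                    ps.AddScript("Import-Module MyCustomModule; Do-Something");
                    foreach (PSObject output in ps.Invoke())
                        Console.WriteLine(output);
                }
            }
        }
    }
    ```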

    Read the article

  • On Windows 7, how do I fix my cmd.exe icon and remove cruft from the jumplist

    - by sb3700
    Hi. When installing drivers for my Gigabyte motherboard, I installed a "Games" link which ran from a batch file. This was pinned to the taskbar by default. As a result, it changed the icon for cmd.exe to the Games icon. I uninstalled the Games application and that got rid of the icon, leaving a white rectangle in its place (I can post screenshots on request). There is also a link in the jump list to open Games, which just opens a cmd window. I've tried rebuilding my icon cache as per "Changing Windows 7 pinned taskbar icons", but this only removed the white rectangle icon, leaving me with no real icon at all. c:\windows\system32\cmd.exe still has the appropriate icon in Explorer, just not on the taskbar. Any ideas on how to fix this annoyance?

    Read the article

  • How to kill all screens that have been up longer than 3 weeks?

    - by Darkmage
    I'm creating a script, to be executed every night at 03:00, that will kill all screens that have been running for longer than 3 weeks. Has anyone done anything similar that could help? If you have a script or a suggestion for a better method, please help by posting :) I was thinking of something like this: first dump the process list to a text file with ps -U username -ef | grep SCREEN > dump.txt, then loop through all lines of dump.txt with a regex, putting the PIDs of the processes whose STIME is three weeks ago or older into an array, and finally run a kill loop over that array.
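
    The poster is after a shell script, but purely to illustrate the logic (list processes with their elapsed running time, compare against a three-week threshold, kill the old ones), here is a rough sketch in C#. The ps -eo pid,etimes,comm invocation and the "screen" process-name filter are assumptions about the environment; it would need to run as the user owning the screens (or as root from cron).

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Linq;

    class KillOldScreens
    {
        const long ThreeWeeksInSeconds = 21L * 24 * 60 * 60;

        static void Main()
        {
            // etimes = elapsed time in seconds since the process started (procps ps).
            var psi = new ProcessStartInfo("ps", "-eo pid,etimes,comm")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (var ps = Process.Start(psi))
            {
                var lines = ps.StandardOutput.ReadToEnd()
                    .Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries);
                foreach (var line in lines.Skip(1)) // skip the header row
                {
                    var fields = line.Trim().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    if (fields.Length < 3) continue;

                    int pid = int.Parse(fields[0]);
                    long elapsedSeconds = long.Parse(fields[1]);
                    string command = fields[2];

                    if (command.Equals("screen", StringComparison.OrdinalIgnoreCase)
                        && elapsedSeconds > ThreeWeeksInSeconds)
                    {
                        Console.WriteLine($"Killing screen pid {pid} (up {elapsedSeconds}s)");
                        Process.Start("kill", pid.ToString()).WaitForExit();
                    }
                }
                ps.WaitForExit();
            }
        }
    }
    ```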

    Read the article

  • Why is my iPhone WIFI so slow at home?

    - by John Fouhy
    When my iPhone is connected to my home wireless network, the internet is unusably slow. I installed the speedtest.net application; here are some results from tonight:
        Down: 0.0kB/s, Up: 0.0kB/s, ping: 2230ms
        Down: 2.5kB/s, Up: 40.5kB/s, ping: 2182ms
        Down: 0.0kB/s, Up: 20.0kB/s, ping: 197ms
    For comparison, here is the result from my iMac to the same server, which is on the same wireless network (and has no wired connection): Down: 139kB/s, Up: 53.8kB/s, ping: 182ms. Neither my iMac nor the Dell laptop that is also on the network has experienced the WiFi problems I get with my iPhone. On the other hand, I tried browsing a website on the wireless network at work with no problems. EDIT: SpeedTest at work gives me 156kB/s down. EDIT 2: My girlfriend (owner of the Dell) reports that the internet is actually sometimes very slow, so perhaps there is more going on. No problems with my iMac, though. My router is an ASUS WL-500g Premium V2 running OpenWrt Kamikaze with X-Wrt Extensions 8.09.

    Read the article

  • Prevent machine in a LAN from receiving a remote shutdown

    - by WebDevHobo
    I'm probably just overreacting, but I recently came across a LAN scanner that showed me a "remote shutdown" option for every computer found on the scanned network. Now, how exactly does this work? If I send such a message, will the shutdown happen no matter what, or does it require the username/password of the user of that other computer? Mostly I'm wondering: can this be done to me, and how do I prevent it? EDIT: What's more, I had the scanner check for shares. Double-clicking the resulting links opens them in Explorer, basically meaning my entire C and F drives (the only two hard drives I have) are completely exposed to anyone on my LAN. Or can I only open these because it's my own machine?

    Read the article

  • Where are YouTube video files stored on the system nowadays?

    - by souravc
    When I open a YouTube video in Firefox, I cannot find any video file inside ~/.mozilla/firefox/<some_string>.default/Cache/. I also tried with google-chrome, like
        ps ax | grep flash
        ls -l /proc/[*PID*]/fd | grep Flash
    But again, /proc/[*PID*]/fd does not contain any video-like file. ls -l /proc/[*PID*]/fd has some results like
        lrwx------ 1 root root 64 Nov 9 12:18 22 -> /run/shm/.com.google.Chrome.eOsHNu (deleted)
        lrwx------ 1 root root 64 Nov 9 12:18 23 -> /run/shm/.com.google.Chrome.p8h6BL (deleted)
    The result of ls -l /proc/[*PID*]/fd | grep Flash for some videos from other sites looks like
        lrwx------ 1 root root 64 Nov 9 12:35 26 -> /home/username/.config/google-chrome/Default/Pepper Data/Shockwave Flash/.com.google.Chrome.QMzxP8 (deleted)
    but it could not be copied. So what are the places where Firefox or google-chrome store streaming videos? And is it possible to recover (copy) a video from there to watch it offline? P.S. I have other ways (downloaders) to save streaming videos, but my question is very specific.

    Read the article

  • Solving “XmlSchemaException: The global element '<elementName>' has already been declared”

    - by ChrisD
    I recently encountered this error when I attempted to consume a new hosted WCF service. The service used the Request/Response model and had been properly decorated. The response and request objects were marked as DataContracts and had a specified namespace. My WCF service interface was marked as a ServiceContract and shared the namespace attribute value. Everything should have been fine, right?
        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }
    Well, not exactly. Apparently the WSDL generator was having an issue: System.Xml.Schema.XmlSchemaException: The global element 'http://schemas.myclient.com/09/12:ActivateSoftwareResponse' has already been declared. After digging, I found the problem: the WSDL generator has some reserved suffixes for its entities, including Response, Request and Solicit (see http://msdn.microsoft.com/en-us/library/ms731045.aspx). The error message is actually the result of a naming conflict. The WSDL generator uses the namespace of the service to build its reserved types. The service contract and data contract shared a namespace, which, coupled with the Response/Request suffixes I was using in my class names, resulted in the SchemaException. The fix, with two options: 1) rename my data contract entities to use a non-reserved suffix (i.e. change ActivateSoftwareResponse to ActivateSoftwareResp), or 2) change the namespace of the data contracts to differ from the service contract namespace. I chose option 2 and changed all my data contracts to use a "http://schemas.myclient.com/09/12/data" namespace value. This avoided the name collision and I was able to produce my WSDL and consume my service.
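
    A sketch of option 2 as described: the data contracts get their own namespace while the service contract keeps the original one (the LicenseKey and Activated members are illustrative, not from the original service).

    ```csharp
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The service contract keeps the original namespace.
    [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
    public interface IProductActivationService
    {
        [OperationContract]
        ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
    }

    // The data contracts move to a distinct namespace, so the wrapper elements the
    // WSDL generator creates for the operation no longer collide with these types.
    [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
    public class ActivateSoftwareRequest
    {
        [DataMember] public string LicenseKey { get; set; } // illustrative member
    }

    [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
    public class ActivateSoftwareResponse
    {
        [DataMember] public bool Activated { get; set; } // illustrative member
    }
    ```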

    Read the article

  • Ubuntu 12.04 - default Radeon driver does not work at all

    - by mumble
    I've recently upgraded to 12.04 LTS and I have an ATI Radeon HD 5670. I've heard that the open source 'Radeon' driver is used by default. However, it wasn't showing anything for me. What I did was add the 'nomodeset' boot option and install fglrx. But that didn't work well for me, as it introduced a lot of problems (freezes/glitches). So I removed/purged fglrx and am planning to use the open source driver instead. So my question is this: why is the default Radeon driver not working? Is anyone having a similar issue? I've also tried using the ubuntu-x-swat drivers by running the following commands:
        sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
    But the result was the same as with the Radeon driver: nothing shows up on system boot. Any ideas? Thanks in advance! Update: Running lspci -nn | grep VGA gives me the following:
        02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Redwood [Radeon HD 5670] [1002:68d8]

    Read the article

  • mail refused by port 25

    - by shantanuo
    When I try to send a mail from my Linux (CentOS) server, the exit status is 0, but the mail never reaches its destination. The /var/log/maillog file has an entry something like this:
        Mar 18 06:33:01 app11 postfix/qmgr[22454]: F18FD9F6074: to=<[email protected]>, relay=none, delay=0.01, delays=0/0/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to alt4.gmail-smtp-in.l.google.com[74.125.45.27]: Connection refused)
    Am I blocked by Google? I tried to send a mail to some other mail server and got a similar result:
        Mar 18 06:33:01 app1 postfix/smtp[15460]: connect to acsinet11.xxx.com[111.222.333.444]: Connection refused (port 25)
    How do I correct this problem?

    Read the article

  • GPU-based procedural terrain borders?

    - by OnePie
    I'm working on a game that should preferably feature a combination of designed and procedurally generated terrain, where the designer specifies in somewhat detailed terms what type of terrain a given area will have (grassland, forest etc...) and then a procedural algorithm takes care of the rest. I'm not talking about Minecraft-style biomes, but rather the game map for a strategy game. Each 'area' will not take up that much of the screen, and is thus more akin to a tile whose texture is procedurally generated. While procedurally generating terrain textures on the GPU is not that difficult, the hard part is making the borders between them look good. Currently, the 'tiles' are large enough to be visible (due mainly to memory constraints; we are talking planetary-sized textures for a game taking place in space and on a continental ground view, with seamless transitions between them), and creating good borders between them with an algorithm that is fast enough to be useful has proven difficult. Sampling the n surrounding pixels and using the combined result did not yield very good borders and was fairly slow on the GPU to boot (ca. 12 ms for me, and that is without any lighting or shading and with very simple terrain texture shaders). So are there any practical, known methods to solve this problem?

    Read the article

  • Set ReturnPath globally in Postfix

    - by Gaia
    I have Magento using sendmail and WordPress using PHPMailer to send webapp-generated mail. Occasionally, someone will enter their email address incorrectly and the mail (let's say, a purchase receipt) will bounce back to the return path specified by the script. I don't want to set the return path for each vhost, especially because it is not easily done. Ideally, WordPress would use the address of the blog admin and Magento would use one of the numerous email fields specified, but they default to using username@machinename (in my case, username is the system user and machinename is an FQDN, but not the same as the actual vhost FQDN). The result is that bounced mail returns to the server and, since the server is used only for outbound SMTP, the messages sit there, undelivered and, worse, unread. I'm running Postfix 2.6.6 on CentOS 6.3; is it possible to globally force a specific return path for all messages sent via PHP on the server?
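
    One way to do this globally, offered as a sketch rather than a drop-in config, is Postfix's sender canonical mapping restricted to the envelope sender, so the visible From: header is left alone while the envelope sender (and therefore the Return-Path at delivery) is rewritten. The bounce address and map file path below are placeholders; reload Postfix after editing.

    ```
    # /etc/postfix/main.cf
    sender_canonical_classes = envelope_sender
    sender_canonical_maps = regexp:/etc/postfix/sender_canonical

    # /etc/postfix/sender_canonical (regexp table, no postmap needed)
    /.+/    bounces@example.com
    ```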

    Read the article

  • How to Reap Anticipated ROI in Large-Scale Capital Projects

    - by Sylvie MacKenzie, PMP
    Only a small fraction of companies in asset-intensive industries reliably achieve expected ROI for major capital projects 90 percent of the time, according to a new industry study. In addition, 12 percent of companies see expected ROIs in less than half of their capital projects. The problem: no matter how sophisticated and far-reaching the planning processes are, many organizations struggle to manage risks or reap the expected value from major capital investments. The data is part of the larger survey of companies in oil and gas, mining and metals, chemicals, and utilities industries. The results appear in Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, a comprehensive new report sponsored by Oracle and developed by the Economist Intelligence Unit. Analysts say the shortcomings in large-scale, long-duration capital-investments projects often stem from immature capital-planning processes. The poor decisions that result can lead to significant financial losses and disappointing project benefits, which are particularly harmful to organizations during economic downturns. The report highlights three other important findings. Teaming the right data and people doesn’t guarantee that ROI goals will be achieved. Despite involving cross-functional teams and looking at all the pertinent data, executives are still failing to identify risks and deliver bottom-line results on capital projects. Effective processes are the missing link. Project-planning processes are weakest when it comes to risk management and predicting costs and ROI. Organizations participating in the study said they fail to achieve expected ROI because they regularly experience unexpected events that derail schedules and inflate budgets. But executives believe that using more-robust risk management and project planning strategies will help avoid delays, improve ROI, and more accurately predict the long-term cost of initiatives. Planning for unexpected events is a key to success. External factors, such as changing market conditions and evolving government policies are difficult to forecast precisely, so organizations need to build flexibility into project plans to make it easier to adapt to the changes. The report outlines a series of steps executives can take to address these shortcomings and improve their capital-planning processes. Read the full report or take the benchmarking survey and find out how your organization compares.

    Read the article

  • Centralizing a resource file among multiple projects in one solution (C#/WPF)

    - by MarkPearl
    One of the challenges you face when doing multi-language support in WPF is when you have several projects in one solution (e.g. a business layer and a UI layer). Typically each project would have its own resource file, meaning that if you have 3 projects in a solution you will have 3 resource files. For me this isn't an ideal solution, as you normally want to send the resource files to a translator, and the more resource files you have, the more fragmented the dictionary will be and the more complicated the job will be for the translator. This can easily be overcome by creating a single project that just holds your translation resources and then exposing it to the other projects as a reference, as explained in the following steps.
    Step 1: Add a class library to your solution that will contain just the resource files. Your solution will now have an additional project.
    Step 2: Reference this project from the other projects.
    Step 3: Move all the resources from the other resource files into the translation project's resource file.
    Step 4: Set the translation project's resource file access modifier to public.
    Step 5: Update all the other projects to use the translation resource file instead of their local resource files. To do this in XAML you need to expose the project as a namespace at the top of the XAML file; note that the example below is for a project called MaxCutLanguages, so you need to put the correct project name in its place.
        xmlns:MaxCutLanguages="clr-namespace:MaxCutLanguages;assembly=MaxCutLanguages"
    Then, in the actual XAML, you replace any hard-coded text with a reference to the resource file:
        <TextBlock Text="{x:Static MaxCutLanguages:Properties.Resources.HelloWorld}"/>
    End result: you can now delete all the resource files in the other projects, as you now have one centralized resource file.
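
    For completeness, the same centralized lookup can be done from code-behind instead of XAML; this snippet assumes the MaxCutLanguages project and a HelloWorld resource entry from the walkthrough above.

    ```csharp
    using System.Windows;
    using System.Windows.Controls;

    public partial class MainWindow : Window
    {
        private void ApplyTranslations(TextBlock greeting)
        {
            // Same lookup the {x:Static} binding performs, done in code.
            greeting.Text = MaxCutLanguages.Properties.Resources.HelloWorld;
        }
    }
    ```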

    Read the article

  • How to combat negative SEO?

    - by Perturbed
    Someone has decided to create a hate blog on a hosted blogging service (wordpress.com) that bashes my company. The blog contains posts that completely flame me and my service, and contains complete falsehoods about how I run my business. Without going into details, I'm pretty sure the author of this blog is an owner of a competing service (although it is authored completely anonymously). Frankly, I'm not sure whether the content would qualify as defamation or not, but I really don't like the idea of spending money on a lawyer to even attempt to prove it. I also have no interest in retorting or even replying to the blog in any way; I feel this would lend credibility to the ludicrous claims that have been posted. Unfortunately, whoever wrote the blog was pretty smart about using the keywords that people commonly use to search for my service. Because my customer base is relatively small and local, our PageRank is not incredibly high. As a result, when someone Googles our business name, this blog is usually within the top five results (thankfully, it's never above the business' actual website, but it's usually within eyeshot). It's incredibly frustrating to hear from customers who have seen the link (luckily, most of the time they think the author is crazy). Is there anything I can do to combat this? Would it be worthwhile to set up my own hosted wordpress.com branded blog, in an effort to push this one down with a more active blog of my own? TL;DR: Someone made a hate blog using wordpress.com and it is now on the first page of my business' search results. What are my options?

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations – one-to-one, one-to-many, many-to-one or many-to-many – NHibernate needs to know what to do with their related entities at three particular moments: when saving, updating or deleting. In particular, there are two possible behaviors: either ignore these related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:
    - None: ignore; this is the default.
    - Save-Update: if the entity is being saved or updated, also save any related entities that are either not yet saved or have been modified, and associate these related entities with the root entity. Generally safe.
    - Delete: if the entity is being deleted, also delete the related entities. This is only useful for parent-child relations.
    - Delete-Orphan: identical to Delete, with the addition that if a related entity is removed from the association – orphaned – it is also deleted. Again, only for parent-child relations.
    - All: combination of Save-Update and Delete; usually that's what we want (for parent-child relations, of course).
    - All-Delete-Orphan: same as All, plus delete any related entities that lose their relationship.
    In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity but not its related entities would result in a constraint violation in the database.
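
    As an illustration, here is a hypothetical parent-child mapping (an Order owning its OrderLines) written with Fluent NHibernate syntax; in an hbm.xml mapping the equivalent is the cascade="all-delete-orphan" attribute on the collection element.

    ```csharp
    using System.Collections.Generic;
    using FluentNHibernate.Mapping;

    public class Order
    {
        public virtual int Id { get; protected set; }
        public virtual IList<OrderLine> Lines { get; set; }
    }

    public class OrderLine
    {
        public virtual int Id { get; protected set; }
    }

    public class OrderMap : ClassMap<Order>
    {
        public OrderMap()
        {
            Id(x => x.Id);
            // Children live and die with the parent: saves/updates cascade,
            // deletes cascade, and lines removed from the collection are deleted too.
            HasMany(x => x.Lines)
                .Cascade.AllDeleteOrphan()
                .Inverse();
        }
    }
    ```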

    Read the article

  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got my new computer with a much bigger hard disk. I think I copied all the important files over, but just to be sure, I'd like to keep a disk image of my old disk. To save space, I'd like to compress it, but I didn't find an option to mount a compressed image. My goals:
    - The result must be easy to access.
    - No need to decompress the whole thing before I can access anything.
    - Files should be quick to locate: no TAR/CPIO archive.
    - The necessary space should be less than just copying the files over.
    So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.

    Read the article

  • Building my own kernel on Ubuntu

    - by chris
    Hi, I'm trying to build my own kernel, as I want to write a kernel program which I need to compile into the kernel. So what did I do? Download the source from kernel.org, extract it, do make menuconfig and configure everything as needed, then make, make modules_install, make install, and finally update-grub. Result: it doesn't boot at all... Now I had a look here, and it describes a different way of compiling a kernel. Could this be the reason why my way did not work? Or does anyone else have an idea why my kernel doesn't work? Edit: Great answer, thank you, Oli. But I tried it the old-fashioned way, and after one hour of compiling I got this message:
        install -p -o root -g root -m 644 ./debian/templates.master /usr/src/linux-2.6.37.3/debian/linux-image-2.6.37.3meinsmeins/DEBIAN/templates
        dpkg-gencontrol -DArchitecture=i386 -isp \
        -plinux-image-2.6.37.3meinsmeins -P/usr/src/linux-2.6.37.3/debian/linux-image-2.6.37.3meinsmeins/
        dpkg-gencontrol: error: package linux-image-2.6.37.3meinsmeins not in control info
        make[2]: *** [debian/stamp/binary/linux-image-2.6.37.3meinsmeins] Error 255
        make[2]: Leaving directory `/usr/src/linux-2.6.37.3'
        make[1]: *** [debian/stamp/binary/pre-linux-image-2.6.37.3meinsmeins] Error 2
        make[1]: Leaving directory `/usr/src/linux-2.6.37.3'
        make: *** [kernel-image] Error 2

    Read the article

  • How to improve my email communication

    - by SpashHit
    Paraphrase of an email I sent to a colleague: "I noticed a problem with System A. I have determined that it is not caused by X. I suspect that it is being caused by Y. (Since you are in charge of Y) Can you please take a look at it?" (The part in parentheses was not included in my email, because this colleague knows full well he is in charge of Y; it should be understood that's why I was sending him the email.) I ran into the colleague half a day later and said, "Did you look at the problem with System A yet?" and he answered, "Oh, yeah, I got your email about System A being broken, but I assumed it was probably caused by X, so I was waiting for you to check on that." Obviously my message did not get communicated. This is an all-too-common result of the emails I send and is very discouraging... yet I don't know how I could make my emails any clearer. They are always as brief, clear, and to the point as I can make them. Any suggestions? Maybe there is something inherent about the medium of email that is just not effective for these types of communication?

    Read the article

  • How to make an image bigger than the screen slideable in MonoGame for Windows Phone 8?

    - by Moses Aprico
    (I don't know if my title is correct, because when I Google it, there are no related results.) I am not sure how to explain it correctly, but I am making a plain 2D, tile-based tactics game for Windows Phone 8 using MonoGame. I want to make my map "slideable". By "slideable" I mean that I can draw images that are (in total) larger than my screen and then slide the view so I can see a certain area of the drawn images. Example: I have a screen whose dimensions are 1280x720. I have a 1500x1500px image, which consists of 15x15 tiles of 100x100px each, and each tile is redrawn every time Draw is called. Since the image is larger than the screen, the displayed area is trimmed, leaving a 220x780px region that cannot be seen. The only way to see all of it is to "slide" the view around. My question is: how do I make that happen? By default the view cannot be slid and the image remains trimmed. Sorry if my question and explanation are not clear enough. Ask me to clarify as much as you like. Thank you.
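
    The usual approach is to keep a camera offset, move it when the player drags a finger, clamp it to the map bounds, and draw the whole map through a translation matrix so only the visible window shows. A rough MonoGame sketch, assuming the 1280x720 screen and 1500x1500 map from the question (the class and field names are made up):

    ```csharp
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input.Touch;

    public class MapCamera
    {
        private Vector2 _offset = Vector2.Zero;   // top-left corner of the visible window
        private Vector2? _lastTouch;
        private readonly Vector2 _mapSize = new Vector2(1500, 1500);
        private readonly Vector2 _screenSize = new Vector2(1280, 720);

        public void Update()
        {
            TouchCollection touches = TouchPanel.GetState();
            if (touches.Count > 0 && touches[0].State == TouchLocationState.Moved
                && _lastTouch.HasValue)
            {
                // Drag the map opposite to the finger movement, staying inside the map.
                _offset -= touches[0].Position - _lastTouch.Value;
                _offset = Vector2.Clamp(_offset, Vector2.Zero, _mapSize - _screenSize);
            }
            _lastTouch = touches.Count > 0 ? touches[0].Position : (Vector2?)null;
        }

        public void Draw(SpriteBatch spriteBatch, Texture2D tileTexture)
        {
            // Tiles are drawn at their map coordinates; the matrix shifts the view.
            Matrix view = Matrix.CreateTranslation(-_offset.X, -_offset.Y, 0f);
            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                              null, null, null, null, view);
            // foreach tile: spriteBatch.Draw(tileTexture, tilePositionOnMap, Color.White);
            spriteBatch.End();
        }
    }
    ```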

    Read the article

  • USB webcam detected in KVM, but doesn't work

    - by Gene Vincent
    I have installed XP in a virtual machine running on Linux with QEMU/KVM (qemu-kvm-0.11.0-4.5.2). I export my Linux webcam to KVM using the switches "-usb -usbdevice host:046d:0929". The XP guest sees the webcam and the drivers install, but the camera only shows a black image. When I open the camera in Windows Explorer, it says "0 images" and a black image, while on a real XP, it says "1 image" and shows the video from the camera. I tried the same with a different webcam, but the result is the same. Any ideas what might be wrong or how I could debug this ?

    Read the article

  • How can I copy the output from a remote command into the local clipboard?

    - by cwd
    I use iTerm2 as my terminal client in Mac OS X. On the local system I can use pbcopy and pbpaste to transfer data between the system clipboard and the terminal, but of course this doesn't work when you're ssh'ed into another machine. Is there some way in which I can take the result of a command and copy it to the clipboard automatically? Perhaps an AppleScript to grab the text in the iTerm window, then get the next-to-last line? For instance, if I wanted to copy the current working directory: I run pwd, then use the mouse to select the text, and then press command + c. Is there any better / faster / automatic way of doing this? I'm not looking for a bulletproof solution that would work for every command (e.g. it might not work when there is a huge scrollback); I'm just looking for something to make this task that I do quite often a little less tedious. Update: I'm looking into using screen to do this, but I'm still not sure if it is possible.

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered on existing questions Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP download. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • Ethernet Connection Unavailable

    - by fabikw
    I'm running Ubuntu Server 12.04 on a laptop with an Intel NIC (driver e1000e). When I connect the ethernet port to the internet (college network, DHCP) it works out of the box. Now I'm trying to connect it to a networked USRP (if you want to know what it is). A friend of mine managed to do this on his laptop (running regular Ubuntu 12.04) just by setting up a new wired connection in the Network Manager with appropriate addresses. However, when I do the same, no wired connections are available. The output of nmcli -p dev is
        ===========================================
          Status of devices
        ===========================================
        DEVICE     TYPE               STATE
        -------------------------------------------
        wlan0      802-11-wireless    connected
        eth0       802-3-ethernet     unavailable
    but the cable is connected to the device and the device is powered up. Any idea how to solve this? UPDATE: After stopping the network-manager service, setting up the connection manually and starting the service again, it now detects the ethernet connection. However, the device still can't receive data and doesn't answer pings. UPDATE 2: As suggested, I tried using a crossover cable, but the result was exactly the same. However, I found out that connecting the device to the dock (as opposed to directly to the laptop) works fine. I know that the ethernet port in the laptop works fine, because connecting to the network through it works. Is it possible that the port in the laptop doesn't support Gigabit Ethernet (which is what the device requires) but the one in the dock does?

    Read the article

  • Convert a VirtualBox disk to a VMware disk

    - by anol
    I have a VirtualBox disk I'd like to convert to a VMware disk. The disk is dynamically allocated, which makes it a lot trickier. If I follow the instructions at http://xpapad.wordpress.com/2010/02/21/migrating-from-virtualbox-to-vmware-in-linux, the VDI-to-raw conversion will result in a 2 TB file. I don't even have that much disk space! The first step therefore seems to be a dynamic-to-fixed-size conversion of the VirtualBox disk, right? How do I do that, or is there perhaps a better way to convert to VMware? Help!

    Read the article

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?

    Read the article
