Search Results

Search found 1351 results on 55 pages for 'pain'.

Page 38 of 55

  • GroupWise 6.5 IMAP Service Shuts Down-Needs Constant Restarting

    - by Jeffrey Majette
    I have an issue with my aging (and soon to be replaced) GroupWise 6.5 system. The IMAP service in the GWIA tends to periodically shut down, requiring that I restart the service. When IMAP is down, my end users who use Blackberry BIS cannot send or receive email on their devices. It's a royal pain to have to restart the IMAP service several times a day. The GWIA logs do not seem to indicate a problem. I thought I was on to something yesterday. I discovered that the gwia.cfg file located in SYS:SYSTEM was actually from GW 6.0. The gwia.cfg in DOMAIN\WPGATE\GWIA was titled for GW 6.5 when opening the file for editing. I changed the generic placeholder info in the file to match my environment and restarted the gwia. However, it made no difference in performance. The IMAP service shuts down about 30 minutes after the last restart. I know this version of GW is antiquated. We are in the process of migrating to Google Apps. But, if anyone has an idea that could fix this issue, I and my end user community would be forever grateful!

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi, I have about 200 sites, each of which has 2 servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copies them to the secondary server, restores the backup and log with norecovery, creates the endpoints and sets the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage sql 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love to be able to use a script to ensure there are matching endpoints (same ports) on both servers, backup the database, backup the log, copy the backups to the second server, restore database and log with norecovery, set the partners on both servers, and somehow confirm that the databases are linked and synchronized. Well, thanks for reading :)
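
    A rough sketch of the sequence described above, driven by sqlcmd from a shell loop. The server names, share path, database name and port are placeholders rather than anything from the original question, so treat this as a starting point, not a finished tool:

    ```bash
    #!/usr/bin/env bash
    # Sketch of the mirroring steps: endpoints, full + log backup, restore WITH
    # NORECOVERY on the mirror, then SET PARTNER on both sides. All names are made up.
    set -euo pipefail

    PRINCIPAL="PRODSQL01"          # hypothetical principal instance
    MIRROR="BACKUPSQL01"           # hypothetical mirror instance
    DB="MyAppDb"                   # hypothetical database
    SHARE='\\BACKUPSQL01\mirror'   # share reachable from both servers
    PORT=5022

    # 1. One mirroring endpoint per instance (this just prints an error if one already exists).
    for SRV in "$PRINCIPAL" "$MIRROR"; do
      sqlcmd -S "$SRV" -E -Q "CREATE ENDPOINT Mirroring STATE = STARTED
          AS TCP (LISTENER_PORT = $PORT)
          FOR DATABASE_MIRRORING (ROLE = PARTNER);"
    done

    # 2. Full and log backup on the principal.
    sqlcmd -S "$PRINCIPAL" -E -b -Q "BACKUP DATABASE [$DB] TO DISK = '$SHARE\\$DB.bak' WITH INIT;
        BACKUP LOG [$DB] TO DISK = '$SHARE\\$DB.trn' WITH INIT;"

    # 3. Restore both on the mirror, leaving it in the restoring state.
    sqlcmd -S "$MIRROR" -E -b -Q "RESTORE DATABASE [$DB] FROM DISK = '$SHARE\\$DB.bak' WITH NORECOVERY;
        RESTORE LOG [$DB] FROM DISK = '$SHARE\\$DB.trn' WITH NORECOVERY;"

    # 4. Set partners: mirror first, then principal.
    sqlcmd -S "$MIRROR"    -E -b -Q "ALTER DATABASE [$DB] SET PARTNER = 'TCP://$PRINCIPAL:$PORT';"
    sqlcmd -S "$PRINCIPAL" -E -b -Q "ALTER DATABASE [$DB] SET PARTNER = 'TCP://$MIRROR:$PORT';"

    # 5. Confirm the pair reports SYNCHRONIZED (or SYNCHRONIZING).
    sqlcmd -S "$PRINCIPAL" -E -Q "SELECT DB_NAME(database_id) AS db, mirroring_state_desc
        FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL;"
    ```

    If the two instances run under different service accounts you will also need GRANT CONNECT on each endpoint; that step is omitted here.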

    Read the article

  • 7-Zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7-Zip archive using 7za.exe. This should be simple, but it turned out to be a major pain. I created a file that contains the paths (7za a out.7z @list.txt), but once there are too many (~100) files, it fails. Apparently the content of the argument file is pushed onto the command line buffer [Edit: this was likely misinformation on my part; either way, it was not the reason], which is far too small (the number of files to add is more than one million). Splitting the process up by adding the files one by one is not feasible due to the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy and finally replaces the original. This is terribly slow once the archive gets to a couple hundred MB in size. So far I am using a combination of the two approaches by adding a dozen files each time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7-Zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably and it was repeatedly suggested that I just use 7za instead.
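
    A sketch of the batched approach described above: many files per 7za invocation, so the archive is only rewritten once per batch rather than once per file. The list file name and the batch size are illustrative; the batch size should be whatever your 7za build tolerates per call.

    ```bash
    #!/usr/bin/env bash
    # Append the files in batches: 7za rewrites the whole archive on every invocation,
    # so fewer, larger invocations is the main win. list.txt holds one path per line.
    set -euo pipefail

    split -l 100 list.txt chunk_       # batch size is illustrative; raise it if your 7za copes

    for part in chunk_*; do
      7za a out.7z @"$part"            # @file: read the file names to add from the list file
    done

    rm -f chunk_*
    ```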

    Read the article

  • Running WAMP (XAMPP) and LAMP from One SSD, On 64-bit Windows and Linux Machines

    - by nicorellius
    I have a solid state drive that I develop websites on. The reason I do this is because I work on a few different computers. Historically, I created separate development environments for each machine. This was OK, but if the system changed for some reason, e.g., a new OS install, it was a pain. So I bought a USB 3.0 enclosure and put a solid state drive in there and it's pretty darn fast, which is good. I was working with three Windows machines and I could simply hook up the drive, launch my XAMPP server and away I went, developing websites: using Dreamweaver, Komodo, Notepad++, Eclipse, etc. Recently, however, the hard drive in one of my Windows machines went down, and instead of going back to Windows in this case, I went with Ubuntu 12.04. I have several Ubuntu workstations and servers and I like Linux, so I thought this was a great opportunity to transition. I went to work installing and trying to set up a LAMP server and, aside from XAMPP 64-bit compatibility out of the box, I'm seeing other issues with getting this Linux server running. I will keep trying to resolve this, but in the meantime... my question is, has anyone ever successfully run both WAMP and LAMP from the same SSD (formatted to NTFS)? I'm sure there are lots of barriers to this happening, like the local file system, OS libraries, dependencies, etc. But I was thinking it would be cool if it could be done. I'm no expert, so if this is just plain old stupid, please don't hesitate to let me know.
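
    For the Ubuntu side, a minimal sketch of mounting the NTFS-formatted SSD and pointing a stock LAMP install at the same htdocs tree XAMPP uses on Windows. The device name, mount point and paths are assumptions, and MySQL data should stay on a native Linux filesystem, since NTFS can't hold Unix permissions or sockets:

    ```bash
    # Assumes Ubuntu 12.04 (Apache 2.2); /dev/sdc1 and /mnt/devssd are placeholders.
    sudo apt-get install apache2 php5 libapache2-mod-php5 mysql-server ntfs-3g
    sudo mkdir -p /mnt/devssd
    sudo mount -t ntfs-3g -o uid=$(id -u),gid=$(id -g),umask=022 /dev/sdc1 /mnt/devssd

    # Point a vhost at the htdocs folder the Windows/XAMPP side already uses.
    sudo tee /etc/apache2/sites-available/devssd > /dev/null <<'EOF'
    <VirtualHost *:80>
        DocumentRoot /mnt/devssd/htdocs
        <Directory /mnt/devssd/htdocs>
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>
    EOF
    sudo a2ensite devssd
    sudo service apache2 reload
    ```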

    Read the article

  • Copying files to my laptop makes them locked

    - by John
    When I save files to my local machine, e.g. from Remote Desktop, from email (Outlook) attachments, or even from Skype, they show a locked icon on the file. Then, for example, SQL Server doesn't let me restore backups, as it says the operating system doesn't have access to the file. I've had success fixing this by setting the ownership of the parent folder to my user and then letting it apply to subfolders. Also, sometimes I need to click Properties - Security - Advanced - Change Permissions, then check "change child permissions..." and apply it on the parent dir. I'm using Windows 7 64-bit Professional on an HP ProBook 4530, and I have an administrator user. This is a real pain to do every time. I suspect it might be because of HP software that came with the laptop; I think there is drive encryption as part of the ProtectTools. Although I'm hoping there's something in Windows I can set to change the behaviour so it doesn't lock these files.

    Read the article

  • Is it possible to map static IP to computer name instead of MAC address?

    - by xenon
    I have a number of computers with different hostnames connected to the network. They currently hold a static IP address based on their MAC address. In other words, the static IP address is mapped to their MAC address. This gives rise to a problem when we swap the hard drive from one computer to another: the MAC address becomes different, and the application we are running on the hard drive has problems getting the right static IP for it to work. We can't configure the IP address in the application all the time. And changing the static IP addresses to re-map to the computer's new MAC address can be quite a pain. Since all the computers have a unique computer name as their hostname, is it possible to configure things so that when these computers grab IP addresses from the DHCP server, DHCP will learn their hostname and assign the correct IP address? That is to say, the static IP would be mapped to the computers' hostnames instead of their MAC addresses. All the computers are running Windows 7. Would this be possible? If so, how should I go about doing this?
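
    Whether this is possible depends entirely on what is handing out the leases, which the question doesn't say. As one illustration only: if the DHCP server is (or could be) dnsmasq, it can key a reservation on the client-supplied hostname rather than on the MAC address. The hostnames and addresses below are made up:

    ```bash
    # dnsmasq example only (an assumption about the DHCP server, not part of the question).
    # dhcp-host with a name instead of a MAC matches on the hostname the client sends.
    cat >> /etc/dnsmasq.conf <<'EOF'
    dhcp-host=ws-lab-01,192.168.1.101
    dhcp-host=ws-lab-02,192.168.1.102
    EOF
    sudo service dnsmasq restart
    ```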

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these syslog messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now but our FW traffic's going to be growing by at least a factor of 5000% (really) in coming months and that'll be a pain for the SAN, I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held-off in some way from the 'physical' disks so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding these writes but there's no need for it to do so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime but that's not made much of a dent in the problem. Obviously I'm investigating other file systems but thought I'd throw this out in case others have the same problem in the future. Oh and I can't just forward these messages to Splunk, our firewall team insist they're in their original format for diag purposes.
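
    Two knobs commonly reached for in this situation, shown only as a sketch with illustrative values: stop the syslog daemon syncing after every message so writes can coalesce in the page cache, and let the kernel and the ext3 journal hold dirty data a little longer before flushing:

    ```bash
    # 1. Classic syslogd/sysklogd fsync()s the log file after every message; a leading
    #    "-" on the path turns that off (rsyslog only syncs if $ActionFileEnableSync is on).
    #    In /etc/syslog.conf or /etc/rsyslog.conf, e.g.:
    #        local4.*    -/var/log/firewalls.log

    # 2. Let dirty pages age longer before writeback, and lengthen the ext3 journal
    #    commit interval (values are illustrative and assume /var/log is its own volume).
    sysctl -w vm.dirty_writeback_centisecs=1500   # wake the flusher every 15s instead of 5s
    sysctl -w vm.dirty_expire_centisecs=6000      # dirty data may sit ~60s before forced writeout
    mount -o remount,commit=60 /var/log           # ext3: batch journal commits every 60s
    ```

    The trade-off is the obvious one: anything still sitting in the page cache is lost if the VM dies, which may or may not be acceptable for firewall logs that are also being forwarded to Splunk.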

    Read the article

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in Gmail will appear in a folder related to that tag, the "all mail" folder, and possibly the "inbox" and "sent mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Second, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd only like one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally and efficiently?

    Read the article

  • Can Windows Media Player create playlists based on folder structure?

    - by Chaulky
    Over the years I've carefully molded my digital media collection into a series of folders that make it easy for me to find what I'm looking for. I recently discovered the awesomeness that is streaming video from Windows 7 Media Player to the PS3 so I can watch it on the big screen without all the hassle of hooking the computer up to the TV. The problem is, I totally lose my carefully crafted folder structure and all my videos become one giant mess again... not cool! As a temporary solution, I've created a few playlists for my favorites (Dexter Season 4, Dexter Season 5, Breaking Bad Season 1, etc.). This is a HUGE pain in the a$$. So, is there a way to get Windows Media Player (on Windows 7) to maintain some sort of folder structure based on the location of the actual video files? So if I have my videos sorted into folders by show and season, Media Player will pick that up and let me browse it in the same way. As an alternative answer, I'll accept suggestions for a program that can also stream to PS3 and has this "folder organization" feature.

    Read the article

  • OS X Application icons missing after filesystem tampering

    - by dylan
    A while back I installed some Google software, I forget what it was, but it was something that attached itself to Google Chrome and was un-uninstallable short of uninstalling all of Chrome. It was a total pain in the a$$. Anyways, I was trying to put up a fight before going nuclear and just wiping Chrome, and during that fight, something went wrong (i.e., I didn't know what I was doing). I remember messing around with the Applications folder somehow. I don't remember what exactly I did, but it perhaps involved some interchanging between the /Applications and /User/Username/Applications folders? Definitely some tampering in those areas, I'm sure at least about that. Anyways, I've since updated and restarted my computer. Now, while all my apps work fine, many of them have blank icons - not on the Dock, those icons are fine, but in Spotlight search results, etc. Currently, my /Applications folder contains only OS X-cooked-in applications. My ~/Applications folder contains cooked in apps AND post-OS/third-party apps. There hasn't been any functionality deficit so far, but I don't like working on a machine where something isn't how it should be. I know my information was vague, but does anyone know what went wrong / how to fix it? OS X Mavericks 10.9.3
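
    Not from the original post, but the usual first thing to try for blank or stale application icons on OS X is rebuilding the LaunchServices database and letting Finder and the Dock re-read their icon caches. It may or may not undo whatever the folder shuffling changed:

    ```bash
    # Rebuild the LaunchServices registration database, then restart Finder and the Dock.
    /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister \
        -kill -r -domain local -domain system -domain user
    killall Finder Dock
    ```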

    Read the article

  • How do I automatically connect my client to an ODBC data source on another machine with dynamic IP?

    - by Kdansky
    At the customer's place, we've got a postgres DB on a server, and a few clients. We connect them through ODBC-drivers, and all machines run windows (usually XP). Now we had a few annoying issues: The client "forgets" some flags in the ODBC drivers, such as ByteA as LO. Every time anything changes, we have to reset that, and type in the password, and sometimes even the IP of the server. On x64 machines running Windows 7, configuring this is a pain as the system settings dialogue will only show 64-bit connections by default. And most importantly: If the server changes IP because the customer restarts or replaces a switch, all connections are lost. Annoyingly, this cannot be fixed with just correcting the IP, but rather, we have to check every single place (even hba_conf) because all the settings magically disappear. Our customers often are very small companies, where "server" means "that one PC in the other room", and not "Oracle mainframe in the dungeon", so we don't want to rely on them not restarting switches. Is there a better way than to rely on these really unstable settings? Are these settings somewhere in a file which I could edit manually, to make fixing it easier?

    Read the article

  • Is there any functional-like unix shell?

    - by Caruccio
    I'm a (real) newbie to functional programming (in fact I've only had contact with it using Python), but it seems to be a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this: $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ] Is there any Unix shell with this kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc...) from within Python (the idea is to use Python's interactive interpreter as a replacement for bash)? EDIT: Maybe a comparative example would clarify. Let's say I have a list composed of dir/file entries: $ FILES=( build/project.rpm build/project.src.rpm ) And I want to do a really simple task: copy all files to dist/ AND install them in the system (it's part of a build process). Using bash: $ cp ${FILES[*]} dist/ $ cd dist && rpm -Uvh $(for f in ${FILES[*]}; do basename $f; done) Using a "pythonic shell" approach (caution: this is imaginary code): $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist' Can you see the difference? THAT is what I'm talking about. How can a shell with this kind of thing built in not exist yet? It's a real pain to handle lists in the shell, even though it's such a common task: lists of files, lists of PIDs, lists of everything. And a really, really important point: using syntax/tools/features everybody already knows: sh and Python. IPython seems to be headed in a good direction, but it's bloated: if a var name starts with '$' it does this, if '$$' it does that. Its syntax is not "natural" - so many rules and "workarounds" ([ ln.upper() for ln in !ls ] -- syntax error).
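
    For what it's worth, plain bash arrays get closer to the two examples than the question suggests; this is just the bash spelling of the same operations, not an answer about functional shells:

    ```bash
    # The clone loop, in plain bash (the host value is a placeholder).
    host=git@example.com
    for repo in repo1 repo2 repo3; do
        git clone "$host/$repo"
    done

    # The copy-and-install example: arrays plus element-wise parameter expansion.
    files=( build/project.rpm build/project.src.rpm )
    cp "${files[@]}" dist/
    ( cd dist && rpm -Uvh "${files[@]##*/}" )   # ##*/ strips the leading dirs, i.e. basename per element
    ```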

    Read the article

  • Windows 7 misses keystrokes from internal keyboard after hibernation (on Acer Aspire 5820)

    - by ron
    I face a very strange symptom on my Acer Aspire laptop (with the factory-default Win7 install and drivers; Windows Update running). After waking the computer from hibernation, it is a pain to type, since on average 5-10 keypresses per 100 are missed when using the laptop's keyboard. Steps to reproduce:
    1) Power off.
    2) Power on, wait for the system to become usable.
    3) Open Notepad and, for 5 different characters, hit the same character 10 times. This gives a pattern of 50 chars in total, similar to: xxxxxxxxxxyyyyyyyyyyaaaaaaaaaassssssssssdddddddddd
    4) Optionally repeat. Everything is fine this far.
    5) Hibernate.
    6) Power on and resume.
    7) Repeat steps 3)-4). This time approximately 3-5 characters will be missing from each 50 characters.
    What I ruled out:
    - putting the machine to Sleep, or just Locking it, and resuming from there does not cause the problem
    - battery / AC usage does not matter
    - net connection does not matter
    - running processes seem to be the same before and after hibernation
    - key press speed doesn't really matter; for the test I use a nominal 3-5 strokes/second beat
    - plugging in an external USB keyboard works fine, but the built-in one still misbehaves
    What could be the problem? How could I diagnose whether the keypresses arrive but get swallowed at some point? (Maybe some nasty keyboard handler hook misbehaves?) Update: It seems that pushing the PowerSmart button and toggling to the power-saving state fixes the problem. Also, toggling it again back to the original state keeps it fixed. So this may be a fine workaround, but it is not a conforming solution.

    Read the article

  • ImageMagick installation confusion

    - by Codemonkey
    Well, this is turning out to be a pain. I'm on CentOS 6.5. I saw on the PECL ImageMagick changelog that they added a load of constants for new filters, such as LANCZOS2SHARP. Some testing I did earlier suggested that Photoshop 6.5 was able to downsize photos with better clarity than my current ImageMagick's best effort of LANCZOS, so I thought I'd try to upgrade to test out the newer filters. So, first port of call: get the source from PECL for 3.2.0RC1. It installed with no problems. But, ah-ha. Although it says it only requires IM 6.2.4, the filters I'm after don't work unless you have 6.6.6+. So I went to http://www.imagemagick.org/script/install-source.php to install the newest version. This is the bit that's puzzled me. It all appears to have installed fine. The tests work fine, passing 40 out of 40. If I run identify -version on the command line it outputs 6.8.9. But if I echo Imagick::getVersion() in PHP it shows 6.5.4, even after restarting php-fpm. rpm -qa | grep ImageMagick shows that I still have 6.5.4 installed, and locate ImageMagick also only seems to show the 6.5.4 one. I feel that the missing link here is ImageMagick-devel; do I need to install that too? How do I go about doing that? Or do I just need to reinstall the pecl-imagemagick 3.2.0RC1 now that I have the latest IM installed?
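
    A sketch of the usual fix: the PECL extension was compiled against the 6.5.4 headers from the distro RPM, so it keeps reporting 6.5.4 until it is rebuilt against the 6.8.9 you compiled. The /usr/local prefix is an assumption; use whatever prefix you gave ./configure when building ImageMagick from source.

    ```bash
    # Remove the old RPMs so their headers/libs can't be picked up at build time
    # (check first that nothing you need depends on them).
    yum remove ImageMagick ImageMagick-devel

    # Rebuild the PECL extension against the source-built ImageMagick.
    pecl uninstall imagick
    export PATH=/usr/local/bin:$PATH      # so MagickWand-config resolves to the 6.8.9 install
    pecl install imagick-3.2.0RC1         # when prompted for the ImageMagick prefix, give /usr/local

    service php-fpm restart
    php -r 'var_dump(Imagick::getVersion());'   # should now report 6.8.9
    ```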

    Read the article

  • Grub Installation Failed: Fatal Error ... now what do I do?

    - by eklavya
    I know there are some threads that touch on this, but I feel I have done something uniquely stupid; hence the post and plea for help. I am a beginner @ Linux. I have a PC with an HDD (hard disk drive) and an SSD (solid state drive). It was running Linux Mint:
    /dev/sda1 - HDD Partition 1 - 2 TB (mounted as /home)
    /dev/sda2 - HDD Partition 2 - 1 TB (separate backup drive; I was backing up files to this)
    /dev/sdb1 - SSD Partition 1 - 100 GB (OS)
    /dev/sdb2 - SSD Partition 2 - 20 GB (swap)
    The operating system was Linux Mint and was installed on /dev/sdb1, i.e. the solid state drive. I had partitioned the sda into 2 TB and 1 TB and presented the 2 TB as /home to the OS. Anyway, last night I decided to make a return to Ubuntu via the path of Elementary OS. Everything went fine with the install until it stated that GRUB installation failed and this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is... My biggest pain is the files on /dev/sda2. I want to save these before I try something drastic like wiping off /dev/sda completely. So I have the following questions:
    1. Can I use a live CD/USB to save these files? I can see /dev/sda2 but was unable to access the files from the live CD.
    2. Last but not least, how do I fix the main issue here? Why could the OS not install GRUB?
    2b. Why is my SSD /dev/sdb and not /dev/sda? Does that have something to do with the fact that my master boot record sits on the HDD (/dev/sda) and not /dev/sdb?
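
    A sketch of the rescue-then-reinstall sequence, run from the live USB. The device names follow the layout described above, but confirm them with lsblk or blkid before touching anything, and treat the mount points as placeholders:

    ```bash
    # Question 1: rescue the files on the 1 TB backup partition first.
    sudo mount /dev/sda2 /mnt
    sudo cp -a /mnt/. /media/usb-backup/       # copy to an external disk before anything else
    sudo umount /mnt

    # Question 2 / 2b: reinstall GRUB from a chroot into the OS on the SSD, targeting
    # whichever disk the BIOS actually boots (use /dev/sda instead if that is the boot disk).
    sudo mount /dev/sdb1 /mnt
    for d in dev dev/pts proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt grub-install /dev/sdb
    sudo chroot /mnt update-grub
    ```

    As for /dev/sda vs /dev/sdb: the letters just reflect how the kernel enumerated the controllers, not the BIOS boot order, and it only matters that grub-install is pointed at the disk the BIOS boots from.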

    Read the article

  • All USB ports on my laptop are dead, any options via Ethernet, SD/MMC or HDMI?

    - by carbontracking
    My son's laptop has taken a lot of pain at school over the last few months, and he and his buddies have succeeded in breaking both USB ports. I've opened the box, unsoldered the USB ports and replaced them with new components, but no joy - the ports seem dead. If I assume that the insertion of LEGO pieces, etc. into the USB ports has rendered them unsalvageable, do I have any other options for restoring USB access to the laptop? The laptop has an Ethernet port, an HDMI port and an SD/MMC port. I've trawled the web for a magic adapter, i.e. Ethernet-to-USB, HDMI-to-USB or SD/MMC-to-USB, but to no avail. Lots of options for going the other way, though. Does anyone have any ideas on the feasibility of an Ethernet-to-USB cable? Ethernet doesn't seem to have +5V or GND, but I could run a cable from the motherboard to provide those. Amazing how many functions of a laptop just disappear when you have no USB ports.

    Read the article

  • OpenVPN / iptables restrict some access

    - by RitonLaJoie
    I want to create an OpenVPN service on a dedicated server I have, for some friends, so that they are able to play online games faster. Is there an easy way to restrict which traffic I allow them with iptables? It seems iptables is not very easy to maintain, and we can easily get locked out of our own server. Rebooting into rescue mode every time I get locked out because of bad iptables rules would just be a pain. As far as I understand, the tun interface would be providing the access. What kind of rule in iptables would I have to implement to restrict their access to only one IP? Also, I don't want this VPN to be the default gateway for all the traffic. I guess I should go with the option of pushing a route to the clients, so that they connect to the IP of the game server through the VPN and use their regular routes through their ISP for all the other traffic? As a side note, it seems OpenVPN AS is not very robust. Is there some other (commercial is OK) product that would give me the same administration options through a web interface? Is Webmin the only other solution? Thanks!
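
    A minimal sketch of that setup, under the usual assumptions (server tun interface tun0 with 10.8.0.0/24 for clients, public interface eth0) and with 203.0.113.50 standing in for the game server's address; none of these values come from the question:

    ```bash
    # Only forward VPN client traffic that is headed for the game server; drop everything else.
    iptables -A FORWARD -i tun0 -d 203.0.113.50 -j ACCEPT
    iptables -A FORWARD -i tun0 -j DROP
    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -d 203.0.113.50 -o eth0 -j MASQUERADE

    # In the OpenVPN server config: do NOT push "redirect-gateway". Push only the one
    # route, so everything else keeps using the clients' normal ISP connection:
    #   push "route 203.0.113.50 255.255.255.255"
    ```

    Because these rules only touch the FORWARD and NAT chains for tun0, a mistake in them can't lock you out of SSH on the server itself, which arrives via the INPUT chain.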

    Read the article

  • Handling WCF Service Paths in Silverlight 4 – Relative Path Support

    - by dwahlin
    If you're building Silverlight applications that consume data then you're probably making calls to Web Services. We've been successfully using WCF along with Silverlight for several client Line of Business (LOB) applications and passing a lot of data back and forth. Due to the pain involved with updating the ServiceReferences.ClientConfig file generated by a Silverlight service proxy (see Tim Heuer's post on that subject to see different ways to deal with it) we've been using our own technique to figure out the service URL. Going that route makes it a piece of cake to switch between development, staging and production environments. To start, we have a ServiceProxyBase class that handles identifying the URL to use based on the XAP file's location (this assumes that the service is in the same Web project that serves up the XAP file). The GetServiceUrlBase() method handles this work:

    ```csharp
    public class ServiceProxyBase
    {
        public ServiceProxyBase()
        {
            if (!IsDesignTime)
            {
                ServiceUrlBase = GetServiceUrlBase();
            }
        }

        public string ServiceUrlBase { get; set; }

        public static bool IsDesignTime
        {
            get
            {
                return (Application.Current == null) ||
                       (Application.Current.GetType() == typeof(Application));
            }
        }

        public static string GetServiceUrlBase()
        {
            if (!IsDesignTime)
            {
                string url = Application.Current.Host.Source.OriginalString;
                return url.Substring(0, url.IndexOf("/ClientBin", StringComparison.InvariantCultureIgnoreCase));
            }
            return null;
        }
    }
    ```

    Silverlight 4 now supports relative paths to services, which greatly simplifies things. We changed the code above to the following:

    ```csharp
    public class ServiceProxyBase
    {
        private const string ServiceUrlPath = "../Services/JobPlanService.svc";

        public ServiceProxyBase()
        {
            if (!IsDesignTime)
            {
                ServiceUrl = ServiceUrlPath;
            }
        }

        public string ServiceUrl { get; set; }

        public static bool IsDesignTime
        {
            get
            {
                return (Application.Current == null) ||
                       (Application.Current.GetType() == typeof(Application));
            }
        }

        public static string GetServiceUrl()
        {
            if (!IsDesignTime)
            {
                return ServiceUrlPath;
            }
            return null;
        }
    }
    ```

    Our ServiceProxy class derives from ServiceProxyBase and handles creating the ABCs (Address, Binding, Contract) needed for a WCF service call. Looking through the code (mainly the constructor) you'll notice that the service URI is created by supplying the base path to the XAP file along with the relative path defined in ServiceProxyBase:

    ```csharp
    public class ServiceProxy : ServiceProxyBase, IServiceProxy
    {
        private const string CompletedEventargs = "CompletedEventArgs";
        private const string Completed = "Completed";
        private const string Async = "Async";
        private readonly CustomBinding _Binding;
        private readonly EndpointAddress _EndPointAddress;
        private readonly Uri _ServiceUri;
        private readonly Type _ProxyType = typeof(JobPlanServiceClient);

        public ServiceProxy()
        {
            _ServiceUri = new Uri(Application.Current.Host.Source, ServiceUrl);
            var elements = new BindingElementCollection
            {
                new BinaryMessageEncodingBindingElement(),
                new HttpTransportBindingElement
                {
                    MaxBufferSize = 2147483647,
                    MaxReceivedMessageSize = 2147483647
                }
            };
            // order of entries in collection is significant: dumb
            _Binding = new CustomBinding(elements);
            _EndPointAddress = new EndpointAddress(_ServiceUri);
        }

        #region IServiceProxy Members

        /// <summary>
        /// Used to call a WCF service operation.
        /// </summary>
        /// <typeparam name="T">The type of EventArgs that will be returned by the service operation.</typeparam>
        /// <param name="callback">The method to call once the WCF call returns (the callback).</param>
        /// <param name="parameters">Any parameters that the service operation expects.</param>
        public void CallService<T>(EventHandler<T> callback, params object[] parameters) where T : EventArgs
        {
            try
            {
                var proxy = new JobPlanServiceClient(_Binding, _EndPointAddress);
                string action = typeof(T).Name.Replace(CompletedEventargs, String.Empty);
                _ProxyType.GetEvent(action + Completed).AddEventHandler(proxy, callback);
                _ProxyType.InvokeMember(action + Async, BindingFlags.InvokeMethod, null, proxy, parameters);
            }
            catch (Exception exp)
            {
                MessageBox.Show("Unable to use ServiceProxy.CallService to retrieve data: " + exp.Message);
            }
        }

        #endregion
    }
    ```

    The relative path support for calling services in Silverlight 4 definitely simplifies code and is yet another good reason to move from Silverlight 3 to Silverlight 4. For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • SQL Server Configuration timeouts - and a workaround [SSIS]

    - by jamiet
    Ever since I started writing SSIS packages back in 2004 I have opted to store configurations in .dtsConfig (i.e. XML) files rather than in a SQL Server table (aka SQL Server Configurations); however, recently I inherited some packages that used SQL Server Configurations and thus had to immerse myself in their murky little world. To all the people that have ever gone onto the SSIS forum and asked questions about ambiguous behaviour of SQL Server Configurations I now say this... I feel your pain! The biggest problem I have had was in dealing with the change to the order in which configurations get applied that came about in SSIS 2008. Those changes are detailed on MSDN at SSIS Package Configurations; the pertinent bits are that, as the utility loads and runs the package, events occur in the following order:
    1. The dtexec utility loads the package.
    2. The utility applies the configurations that were specified in the package at design time and in the order that is specified in the package. (The one exception to this is the Parent Package Variables configurations. The utility applies these configurations only once and later in the process.)
    3. The utility then applies any options that you specified on the command line.
    4. The utility then reloads the configurations that were specified in the package at design time and in the order specified in the package. (Again, the exception to this rule is the Parent Package Variables configurations.) The utility uses any command-line options that were specified to reload the configurations. Therefore, different values might be reloaded from a different location.
    5. The utility applies the Parent Package Variable configurations.
    6. The utility runs the package.
    To understand how these steps differ from SSIS 2005 I recommend reading Doug Laudenschlager's blog post "Understand how SSIS package configurations are applied". The very nature of SQL Server Configurations means that the Connection String for the database holding the configuration values needs to be supplied from the command line. Typically the call to execute your package resembles this:
    dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;\""
    The problem then is that, as per the steps above, the package will (1) attempt to apply all configurations using the Connection String stored in the package for the "SSISConfigurations" Connection Manager, before then (2) applying the Connection String from the command line and then (3) applying the same configurations all over again. In the packages that I inherited, that first attempt to apply the configurations would time out (not unexpected); I had 8 SQL Server Configurations in the package and thus the package was waiting for 2 minutes until all the Configurations timed out (i.e. 15 seconds per Configuration) - in a package that only executes for ~8 seconds when it gets to do its actual work, a delay of 2 minutes was simply unacceptable. We had three options for dealing with this:
    1. Get rid of the use of SQL Server Configurations and use .dtsConfig files instead.
    2. Edit the packages when they get deployed.
    3. Change the timeout on the "SSISConfigurations" Connection Manager.
    #1 was my preferred choice but, for reasons I explain below*, wasn't an option in this particular instance. #2 was discounted out of hand because it negates the point of using Configurations in the first place. This left us with #3 - change the timeout on the Connection Manager. This is done by going into the properties of the Connection Manager, opening the "All" tab and changing the Connect Timeout property to some suitable value (I chose 2 seconds). This change meant that the attempts to apply the SQL Server configurations timed out in 16 seconds rather than two minutes; clearly this isn't an optimum solution, but it's certainly better than it was. So there you have it - if you are having problems with SQL Server configuration timeouts within SSIS, try changing the timeout of the Connection Manager. Better still - don't bother using SQL Server Configurations in the first place. Even better - install RC0 of SQL Server 2012 to start leveraging SSIS parameters and leave the nasty old world of configurations behind you. @Jamiet * Basically, we are leveraging an SSIS execution/logging framework in which the client had invested a lot of resources, and SQL Server Configurations are an integral part of that.

    Read the article

  • Sneak peek at next generation Three MiFi unit – Huawei E585

    - by Liam Westley
    Last Wednesday I was fortunate to be invited to a sneak preview of the next generation Three MiFi unit, the Huawei E585. Many thanks to all those who posted questions either via this blog or via @westleyl on Twitter. I think I made sure I asked every question posed to the MiFi product manager from Three UK, and so here are the answers you were after.
    What is a MiFi? For those who are wondering, a MiFi unit is a 3G broadband modem combined with a WiFi access point, providing 3G broadband data access to up to five devices simultaneously via standard WiFi connections.
    What is different? It appears the prime task of enhancing the MiFi was to improve the user experience and user interface, both in terms of the device hardware and within the management software used to configure the device. I think this was a very sensible decision, as these areas had substantial room for improvement.
    - Single-button operation to switch on, enable WiFi and connect to 3G
    - Improved OLED display (see below), replacing the multi-coloured LEDs; it shows signal strength, SMS notifications, the number of connected clients and data usage
    - Management is via a web-based dashboard accessible from any web browser. This is a big win for those running Linux or Mac OS X, for iPad users and, for me, as I can now configure the device from Windows 7 64-bit
    - Charging is via micro USB, the new standard for small USB devices; you cannot use your old charger for the new MiFi unit
    - Automatic reconnection when regaining a signal
    - Improved charging time, which should allow recharging of the device while in use
    - Although subjective, the black and silver design does look more classy than the silver and white plastic of the original MiFi
    What is the same?
    - Virtually the same size and weight
    - The battery is the same unit as the original MiFi, so you'll have a handy spare if you upgrade
    - Data plans remain the same as the current MiFi, so the cheapest price for upgraders will be £49 pay as you go
    - Still only works on 3G networks, with no fallback to GPRS or EDGE
    - There is no specific upgrade path for existing Three customers, either from a dongle or from the original MiFi
    My opinion: I think Three have concentrated on the correct areas of usability and user experience rather than trying to add new whizz-bang technology features which aren't of interest to mainstream users. The one-button operation and the improved device display will make it much easier to use when out and about. If the automatic reconnection proves reliable, that will remove a major bugbear that I experienced the previous evening when travelling on the First Great Western line from Paddington to Didcot Parkway. The signal was repeatedly lost as we sped through tunnels and cuttings, and without automatic reconnection it was a real pain to keep pressing the data button on the MiFi to re-establish my data connection. And finally, the web-based dashboard will mean I no longer need to resort to my XP-based netbook to configure the SSID and password. My everyday laptop runs Windows 7 64-bit, which appears to confuse the older 3 WiFi manager, which cannot locate the MiFi when connected.
    Links to other sites, and other images of the device: Good first impressions from Ben Smith, http://thereallymobileproject.com/2010/06/3uk-announce-a-new-mifi-with-a-screen/ Also, a round-up of other sneak preview posts, http://www.3mobilebuzz.com/2010/06/11/mifi-round-two-your-view/
    Pictures: Here is a comparison of the old MiFi device next to the new device, complete with OLED display and the Huawei logo now being a prominent feature on the front of the device. One of my fellow bloggers had a Linux-based netbook, showing off the web-based dashboard complete with a Text messages panel to manage SMS. And finally, I never thought that my blog subtitle would ever end up printed onto a cupcake, ... and here are some of the other cupcakes ...

    Read the article

  • I’m a Phoenix… and I’m miffed

    - by Stan Spotts
    For personal reasons, almost 30 years ago I left school to enter the workforce. I decided late 2008 to go back to school and finish my degree. After the expected loss of credits for a transfer, from Temple University to University of Phoenix, I'm now about 75% done. The experience has been interesting. Classes are time compressed, only 5 weeks each. Because I have a family and a full time job, I'm taking one at a time. Even so, I've written more papers in these classes than I ever wrote at Temple. My own papers are one thing, but the team papers give me heartburn since I can't completely control what goes into them. Not a big deal except that they make up 30% of our grade. In any case, most of the class facilitators have been great. I had great ones for Accounting, Finance, and frankly most others. I've had a few (4, maybe) cases where I was less than 2 points from an A, and asked the facilitator if I could get any of my work reviewed to see if I could get those extra points. I figured it was worth a shot, and there were no extenuating circumstances to help make my case. I think that only one facilitator decided after a review of one paper that my interpretation was good, just not what he expected, and gave me another point, which gave me an A. So while none are pushovers, they've all been open to discussion, which is as much as I should expect. Overall, good experience. That is, until my last class. On the second week, the day I was due to hand in my personal assignment for the week, I was in an accident. An SUV creamed my little Ford Focus, and totaled it (estimated repair over $11K). I was pretty banged up, especially my left shoulder. I was scheduled for rotator cuff surgery for two weeks later, and getting hit against the door really made it worse. After dealing with the police, the EMT, the tow truck, and the Percocet and Flexeril for the pain, I crashed for the night and didn't get to upload my paper until the next day. The instructor took 30% off for it being late, even after I supplied photos of the car, my arm (huge bruises), and offered to supply the police report number. I figured I'd be okay since that's 2.7 points, and I could lose up to 5 before jeopardizing an A grade. Well, that wasn't the case as we lost more points than I expected on our team paper in Week 5. I ended up with a 94.3. Yes, 7/10 of a point from an A. Of course I asked the instructor to review the issue with the accident and give me just the 0.7 points I needed for the A. That got me a short response of "I have received your emails and review your work over the last five weeks. Your current grade will stand. If you would like to dispute your grade then please feel free to contact your academic advisor. I wish you much success in your professional and academic career." Brrrr….! So I asked my academic advisor to file a dispute. If it wasn't that a pretty bad car accident was the cause, I wouldn't have. Without the grade reduction, I would have had a 97 for the class, so I'll argue that I was performing at the A level throughout the class. Why her purported "review" of my work didn't then warrant such a minor adjustment, I don't know. An A- drops my GPA, and this ticked me off. Now I have to wait and see what the school says about the grade dispute.

    Read the article

  • Beware when using .NET's named pipes in a windows forms application

    - by FransBouma
    Yesterday a user of our .NET ORM Profiler tool reported that he couldn't get the snapshot-recording-from-code feature working in a Windows Forms application. Snapshot recording in code means you start recording profile data from within the profiled application, and after you're done you save the snapshot as a file which you can open in the profiler UI. When using a console application it worked, but when a Windows Forms application was used, the snapshot was always empty: nothing was recorded. Obviously, I wondered why that was, and debugged a little. Here's an example piece of code to record the snapshot. This piece of code works OK in a console application, but results in an empty snapshot in a Windows Forms application:

    ```csharp
    var snapshot = new Snapshot();
    snapshot.Record();
    using(var ctx = new ORMProfilerTestDataContext())
    {
        var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
    }
    InterceptorCore.Flush();
    snapshot.Stop();
    string error = string.Empty;
    if(!snapshot.IsEmpty)
    {
        snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
    }
    if(!string.IsNullOrEmpty(error))
    {
        Console.WriteLine("Save error: {0}", error);
    }
    ```

    (The Console.WriteLine doesn't do anything in a Windows Forms application, but you get the idea.) ORM Profiler uses named pipes: the interceptor (referenced and initialized in your application, the application to profile) sends data over the named pipe to a listener, which, when receiving a piece of data, begins reading it asynchronously and, once it has been properly read, signals observers that new data has arrived so they can store it in a repository. In this case, the snapshot is the observer and stores the data in its own repository. The reason the above code doesn't work in Windows Forms is that Windows Forms is a wrapper around Win32 and its WM_* message-based system. Named pipes in .NET are wrappers around Windows named pipes, which also work with WM_* messages. Even though we use BeginRead() on the named pipe (which spawns a thread to read the data from the named pipe), nothing is received by the named pipe in the Windows Forms application, because it doesn't handle the WM_* messages in its message queue till after the method is over: the message pump of a Windows Forms application is handled by the only thread of the application, so it will handle WM_* messages when the application idles. The fix is easy though: add Application.DoEvents(); right before snapshot.Stop(). Application.DoEvents() forces the Windows Forms application to process all WM_* messages in its message queue at that moment: all messages for the named pipe are then handled, the .NET code of the named pipe wrapper will react to that, and the whole process will complete as if nothing happened. It's not that simple to just say 'why didn't you use a worker thread to create the snapshot here?', because a thread doesn't get its own message pump: the messages would still be posted to the window's message pump. A hidden form would create its own message pump, so the additional thread would also have to create a window to get the WM_* messages of the named pipe posted to a different message pump than the one of the main window. This WM_* message pain is not something you want to be confronted with when using .NET and its libraries. Unfortunately, the way they're implemented, a lot of APIs are leaky abstractions: they bleed the characteristics of the OS objects they hide away through to the .NET code. Be aware of that fact when using them :)

    Read the article

  • Understanding Collabnet's LDAP binding

    - by Robert May
    We want to use both Subversion usernames and passwords as well as Active Directory for our authentication on our Collabnet Subversion server. This has proven to be more of a challenge than we thought, mostly because Collabnet's documentation is pretty poor. To supplement that documentation, I add my own. The first thing to understand is that the attribute that you specify in the LDAP Login Attribute ONLY applies to lookups done for the user. It does NOT apply to the LDAP Bind DN field. Second, know that the debug logs (error is the one you want) don't give you debug information for the bind DN, just the login attempts. Third, by default, Active Directory does not allow anonymous binds, so you MUST put in a user that has the authority to query the Active Directory LDAP. Because of these items, the values to set in those fields can be somewhat confusing. You'll want to have ADSI Edit handy (I also used ldp, which is installed by default on Server 2008), since ADSI Edit can help you find stuff in your Active Directory. Be careful, you can also break stuff. Here's what should go into those fields.
    LDAP Security Level: Should be set to None.
    LDAP Server Host: Should be set to the full name of a domain controller in your domain. For example, dc.mydomain.com.
    LDAP Server Port: Should be set to 3268. The default port of 389 will only query that specific server, not the global catalog. By setting it to 3268, the global catalog will be queried, which is probably what you want.
    LDAP Base DN: Should be set to the location where you want the search for users to begin. By default, the search scope is set to sub, so all child organizational units below this setting will be searched. In my case, I had created an OU specifically for users for group policies. My value ended up being OU=MyOu,DC=domain,DC=org. However, if you're pointing it to the default Users folder, you may end up with something like CN=Users,DC=domain,DC=org (or com or whatever). Again, use ADSI Edit and use the Distinguished Name that it shows.
    LDAP Bind DN: This needs to be the Distinguished Name of the user that you're going to use for binding (i.e. the user you'll be impersonating) for doing queries. In my case, it ended up being CN=svn svn,OU=MyOu,DC=domain,DC=org. Why the double svn, you might ask? That's because the first and last name fields are set to svn, and by default the distinguished name is the first and last name fields! That's important. It's NOT the username or account name! Again, use ADSI Edit, browse to the username you want to use, right click and select Properties, then search the attributes for the Distinguished Name. Once you've found that, select it and click View and you can copy and paste it into this field.
    LDAP Bind Password: This is the password for the account in the Bind DN.
    LDAP Login Attribute: sAMAccountName. If you leave this blank, uid is used, which may not even be set. This tells it to use the Account Name field that's defined under the Account tab for users in Active Directory Users and Computers. Note that this attribute DOES NOT APPLY to the LDAP Bind DN. You must use the full distinguished name of the bind DN. This attribute allows users to type their username and password for authentication, rather than typing their distinguished name, which they probably don't know.
    LDAP Search Scope: Probably should stay at sub, but could be different depending on your situation.
    LDAP Filter: I left mine blank, but you could provide one to limit what you want to see. LDP would be helpful for determining what this is.
    LDAP Server Certificate Verification: I left it checked, but didn't try it without it being checked.
    Hopefully, this will save some others pain when trying to get Collabnet set up. Technorati Tags: Subversion, collabnet

    Read the article
