Search Results

Search found 51790 results on 2072 pages for 'long running'.

  • How to make DD-WRT router's (configured like a repeater) devices be accessible on LAN? (i.e. integrate DHCP for both routers)

    - by Annonomus Penguin
    I have a D-Link DIR-600-A1 router running DD-WRT (using the 601's firmware: except for the model number, they are near identical). It has an Atheros chip, so there is no "repeater" option. You can work around this by setting the main radio as a client of the main router and adding a virtual radio configured as an AP; you can then set up the credentials for connecting to the main router and for allowing devices to connect to the repeater/router. I have a few devices on my network:
      - Ethernet computers
      - Server with Samba running
      - WiFi devices connected to the main router
    I then wanted to add a repeater, which has a couple of other things on it:
      - WiFi computer
      - Other WiFi devices
    Anyway, I wanted to connect my WiFi computer to the share on my server via Samba. However, for some reason, my router treats the main router as WAN, not as another device. I've tried disabling the SPI firewall, but that doesn't help. Pinging my WiFi computer from my server fails; however, I can ping my server from my WiFi computer. AFAIK, they are on the same subnet, just using different IPs: the main one uses 192.168.0.x and the repeater uses 192.168.1.x (starting at 100 for some reason). It seems as if I need to configure my routers to work together for DHCP. I noticed there was a "DHCP forwarder" option, but I have no idea what that would do. A quick note: for some reason (that's beyond me) my ISP disabled the capability to bridge a WiFi-to-Ethernet connection with the router they provide (something about PPPoE or similar). The service rep I talked to when I was having issues after I changed ISPs said that, but couldn't explain exactly what they were "blocking." How can I get DD-WRT to stop treating the client connection as WAN, and get the main router to recognize the devices connected to the repeater?

  • Is it possible to sync Rhythmbox playlist to Ubuntu One Music or use a m3u file as Ubuntu One playlist?

    - by WeiSkiy
    I recently started trying Ubuntu One music streaming, and so far it works well. However, I have a problem: I have several long playlists in Rhythmbox (300-500 songs each, in my preferred order), and I really like playing songs from a playlist instead of shuffling all songs. So I wonder: can I upload my playlists to Ubuntu One Music, or recreate them there without dragging songs over one by one? Is it possible to do this from Rhythmbox, or by uploading an m3u playlist file, or by some other method? Please suggest any way to "copy" my playlists to Ubuntu One Music. Thanks very much!
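
    (Not an official Ubuntu One feature, just a possible route if it accepts playlist files.) Rhythmbox can export a playlist to a file (the Save to File option on the playlist's context menu, in most versions), and the M3U format is plain text, so it is easy to inspect or edit by hand. A minimal sketch of an extended M3U file, with hypothetical paths:

        #EXTM3U
        #EXTINF:215,Artist One - First Song
        /home/user/Music/Artist One/First Song.mp3
        #EXTINF:189,Artist Two - Second Song
        /home/user/Music/Artist Two/Second Song.mp3

    Each #EXTINF line carries the track length in seconds and a display title; the line after it is the path to the file as it exists in the synced music folder.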

  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array. I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure; however, the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
         scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

            NAME                        STATE     READ WRITE CKSUM
            vol1                        ONLINE       0     0     0
              mirror-0                  ONLINE       0     0     0
                c8t5000C50031B94409d0   ONLINE       0     0     0
                c9t5000C50031BBFE25d0   ONLINE       0     0     0
              mirror-1                  ONLINE       0     0     0
                c10t5000C50031D158FDd0  ONLINE       0     0     0
                c11t5000C5002C823045d0  ONLINE       0     0     0
              mirror-2                  ONLINE       0     0     0
                c12t5000C50031D91AD1d0  ONLINE       0     0     0
                c2t5000C50031D911B9d0   ONLINE       0     0     0
              mirror-3                  ONLINE       0     0     0
                c13t5000C50031BC293Dd0  ONLINE       0     0     0
                c14t5000C50031BD208Dd0  ONLINE       0     0     0
              mirror-4                  ONLINE       0     0     0
                c15t5000C50031BBF6F5d0  ONLINE       0     0     0
                c16t5000C50031D8CFADd0  ONLINE       0     0     0
              mirror-5                  ONLINE       0     0     0
                c17t5000C50031BC0E01d0  ONLINE       0     0     0
                c18t5000C5002C7CCE41d0  ONLINE       0     0     0
            logs
              c19t0d0                   ONLINE       0     0     0
            cache
              c6t5001517959467B45d0     FAULTED      2   542     0  too many errors
            spares
              c7t5000C50031CB43D9d0     AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendation to RAID L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well). Edit - what's the current best-choice SSD for L2ARC cache applications these days?
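
    (A possible recovery path, sketched from standard zpool syntax using the device names above; verify the commands against your Nexenta release before running them.) A cache vdev can be dropped and re-added without taking the pool offline:

        # drop the faulted L2ARC device from the pool
        zpool remove vol1 c6t5001517959467B45d0

        # after physically swapping the SSD, re-add it as cache
        zpool add vol1 cache c6t5001517959467B45d0

        # confirm the pool layout
        zpool status vol1

    Unlike data vdevs, cache devices hold no unique data, so removing one is safe; the L2ARC simply rewarms from ARC evictions afterwards.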

  • Computer specs for a large database

    - by SpeksETC
    What sort of computer specs (CPU, RAM, disk speed) should I use for running queries on a database of 200+ million records? The queries are for a research project, so there is only one "user" and only one query will be running at a time. I tried it on my own laptop (SQL Server, i3 processor, 2GB RAM, 5400 RPM disk) and a simple query didn't finish even after 8+ hours. I have the option to connect an SSD via eSATA and upgrade to 4GB RAM, but I'm not sure that will be enough... Thanks! Edit: the database is about 25GB and the indexes are not set up properly. When I tried to add an index, I let it run for about 8 hours and it still hadn't finished, so I gave up. Should I have had more patience :)? In general, the queries will run once in a while, and it's OK even if one takes a couple of hours to complete. Also, the queries will produce roughly 10 million records, which I need to process using Stata/MATLAB; I'm concerned that my current laptop is not strong enough, but I'm unsure where the bottleneck is.
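
    On the indexing point: for a single-user research workload like this, a covering nonclustered index matched to the query's filter columns will usually do far more than extra CPU or RAM. A hedged T-SQL sketch (the table and column names are hypothetical):

        -- index the columns the query filters or joins on,
        -- and INCLUDE the columns it merely selects
        CREATE NONCLUSTERED INDEX IX_Records_EventDate
            ON dbo.Records (EventDate, RegionCode)
            INCLUDE (Amount, SubjectId);

    Building an index over 200 million rows on a 5400 RPM laptop disk can genuinely take many hours, so the abandoned 8-hour build was not necessarily hung; on an SSD or server-class disks the same build is dramatically faster.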

  • How a batch file runs on a remote machine when started by PSEXEC

    - by user38780
    I am having an issue running a batch file on a remote machine using PSEXEC. The file runs, but not the way it does when run through Remote Desktop. The batch file starts a 32-bit application, which opens multiple 16-bit applications; these should all run under one ntvdm.exe (in one memory space). Through Remote Desktop the batch file runs under the explorer process and works correctly, opening only one ntvdm.exe. Using PSEXEC the batch runs, but not under the explorer process, and a separate ntvdm.exe is opened for each process. I found that launching the batch via explorer in PSEXEC works, but it brings up a "File Download - Security Warning", e.g.:

        psexec.exe \\compname -u username -p password -s -d -i 0 explorer C:\Program.bat

    I want to be able to run the batch successfully without receiving warnings; it is a local warning, not a network-share warning. (It is possible to recreate the warning by typing "explorer C:\windows\system32\cmd.exe" into Run.) I would like to know if anyone knows a way to get PSEXEC to run the batch file as though it was started by explorer, or a way of removing the local "File Download - Security Warning". Thanks
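
    One commonly suggested way to suppress that warning is the Attachment Manager's low-risk file types policy; a hedged sketch as a .reg fragment (note this deliberately loosens a safety check):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Associations]
        "LowRiskFileTypes"=".bat;.cmd"

    Since the -s switch runs the command as SYSTEM, the value may need to exist in whichever user hive the launched process actually loads, not just the interactive user's HKCU, for it to take effect.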

  • Where to go after Adobe Flex? [closed]

    - by jan halfar
    After this post http://blogs.adobe.com/flex/2011/11/your-questions-about-flex.html and especially this paragraph:

        "Does Adobe recommend we use Flex or HTML5 for our enterprise application development?
        In the long term, we believe HTML5 will be the best technology for enterprise application
        development. We also know that, currently, Flex has clear benefits for large-scale client
        projects typically associated with desktop application profiles."

    make no mistake, the days of Flex are over. Thus a lot of people are asking themselves: which technology (or technologies) will solve their own and their customers' problems in a future without Flex? P.S.: Obviously the correct answer for Adobe would have been: "...Since we believe that HTML5 will be the best technology for enterprise application development, we will ensure that it is targeted by future releases of the Flex framework..."

  • SSH garbling characters in vim/nano on remote server

    - by geerlingguy
    ... and it's driving me insane. Basically (this has been happening over the past couple of months), I log into a few different CentOS servers (one Linode, another VPS, and a shared host to which I have shell access), running 5.5, 5.7, and 6, from my Mac running OS X Lion, using Terminal. Basically:

        $ ssh [email protected] [remote-host]
        $ nano somefile.txt

    Once I start editing the file, if I use the arrow keys to move the cursor around, or start deleting and then typing again, the cursor jumps around a bit; and if I save the file and reopen it, it's obvious that the cursor was, in fact, jumping all over the place on a line for no apparent reason. I end up getting things like "This is a neof text." when I had typed (into the cursor-crazy editor) "This is a line of text." It's a big problem when it comes to editing configuration files, because I often have to edit one line, save and close, then reopen just to make sure that line is right... then edit another line... and it's getting quite annoying. I found "Linode Lish Shell Vim and Nano rendering troubles: lines not appearing / cursor positions wrong", but I don't know if that relates much, since that question is specifically about lish.
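
    Cursor garbling like this is often a TERM or locale mismatch between Lion's Terminal and the remote side, rather than anything wrong with vim/nano themselves. A hedged first round of checks, run on the remote host:

        # see what the session negotiated
        echo $TERM
        locale

        # try a terminal type the older CentOS terminfo certainly knows
        export TERM=xterm

        # and force a UTF-8 locale, in case the session came up as POSIX/C
        export LC_ALL=en_US.UTF-8

    If that cures it, the permanent fix is either exporting those in the remote shell profile, or stopping ssh from forwarding the Mac's locale settings (the SendEnv LANG LC_* line in /etc/ssh_config on the OS X side).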

  • Can't successfully run SharePoint Foundation 2010 first-time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of the configuration wizard using PowerShell, because I would like to set the config and admin database names. The GUI wizard doesn't give you all possible options for configuration (not that it has worked for me either). I run this command, all on one line; I've broken it into separate lines here to make it easier to read:

        New-SPConfigurationDatabase
            -DatabaseName "Sharepoint2010Config"
            -DatabaseServer "developer.mydomain.pri"
            -AdministrationContentDatabaseName "Sharepoint2010Admin"
            -DatabaseCredentials (Get-Credential)
            -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force)

    When I run this command I get this error:

        New-SPConfigurationDatabase : Cannot connect to database master at SQL server at
        developer.mydomain.pri. The database might not exist, or the current user does
        not have permission to connect to it.
        At line:1 char:28
        + New-SPConfigurationDatabase <<<<  -DatabaseName "Sharepoint2010Config" -Datab
        aseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint
        2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureS
        tring "%h4r3p0int" -AsPlainText -Force)
            + CategoryInfo          : InvalidData: (Microsoft.Share...urationDatabase:
           SPCmdletNewSPConfigurationDatabase) [New-SPConfigurationDatabase], SPException
            + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPConfigurationDatabase

    I created two domain accounts and haven't added them to any group:
      - SPF_DATABASE - database account
      - SPF_ADMIN - farm account
    I'm running the PowerShell console as domain administrator. I've tried running SQL Management Studio as domain admin and creating a dummy database, and that worked without a problem. I'm running:
      - Windows 7 x64 on the machine where SharePoint Foundation 2010 should be installed, which also has SQL Server 2008 R2 preinstalled
      - Windows Server 2008 R2 Server Core as my domain controller, which just serves domain features and nothing else
    I've installed SharePoint according to the MS guide (http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx), installing all additional patches relevant to my configuration. Any ideas what I should do to make it work?
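
    Since the failure happens before any SharePoint work starts, it may be worth confirming outside PowerShell that the credentials given to Get-Credential can reach the master database at all. A hedged check with sqlcmd (installed with SQL Server 2008 R2); the password placeholder is deliberate:

        :: Windows authentication, as the account running the console
        sqlcmd -S developer.mydomain.pri -d master -E -Q "SELECT @@VERSION"

        :: SQL authentication, only if SPF_DATABASE exists as a SQL login
        sqlcmd -S developer.mydomain.pri -d master -U SPF_DATABASE -P <password> -Q "SELECT @@VERSION"

    One thing to note: -DatabaseCredentials normally switches the cmdlet to SQL authentication, so a plain domain account that only exists in Windows will fail in exactly this way; for Windows authentication the parameter is usually omitted.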

  • Compiling Mono on Fedora 7

    - by Gary
    Trying to install Mono from source on Fedora 7. Running

        # ./configure --prefix=/opt/mono

    works fine, but doing the make

        # make ; make install

    ends up with the following:

        Makefile:93: warning: overriding commands for target `csproj-local'
        ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
        make install-local
        make[6]: Entering directory `/opt/mono-2.6.4/mcs/mcs'
        Makefile:93: warning: overriding commands for target `csproj-local'
        ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
        MCS     [basic] mcs.exe
        typemanager.cs(2047,40): error CS0103: The name `CultureInfo' does not exist in the context of `Mono.CSharp.TypeManager'
        Compilation failed: 1 error(s), 0 warnings
        make[6]: *** [../class/lib/basic/mcs.exe] Error 1
        make[6]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
        make[5]: *** [do-install] Error 2
        make[5]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
        make[4]: *** [install-recursive] Error 1
        make[4]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[3]: *** [profile-do--basic--install] Error 2
        make[3]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[2]: *** [profiles-do--install] Error 2
        make[2]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[1]: *** [install-exec] Error 2
        make[1]: Leaving directory `/opt/mono-2.6.4/runtime'
        make: *** [install-recursive] Error 1

    I've been following the instructions at http://ruakuu.blogspot.com/2008/06/installing-and-configuring-opensim-on.html. This is all in an effort to get OpenSimulator running.

  • yum error when installing memcached

    - by Jack
    Hi, I'm trying to install memcached with "yum install memcached" and I'm getting all these errors, which I have no idea how to solve:

        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated
        --> Processing Dependency: perl(AnyEvent) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached
        --> Processing Dependency: perl(YAML) for package: memcached
        --> Processing Dependency: perl(Term::ReadKey) for package: memcached
        --> Processing Dependency: libevent-1.1a.so.1()(64bit) for package: memcached
        --> Running transaction check
        ---> Package compat-libevent-11a.x86_64 0:3.2.1-1.el5.rf set to be updated
        ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated
        --> Processing Dependency: perl(AnyEvent) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached
        --> Processing Dependency: perl(YAML) for package: memcached
        --> Processing Dependency: perl(Term::ReadKey) for package: memcached
        --> Finished Dependency Resolution
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(AnyEvent::Socket) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(AnyEvent) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(AnyEvent::Handle) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(YAML) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(Term::ReadKey) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        Packages skipped because of dependency problems:
            compat-libevent-11a-3.2.1-1.el5.rf.x86_64 from rpmforge
            memcached-1.4.5-1.el5.rf.x86_64 from rpmforge

    The perl modules it's complaining about are already installed. Any ideas?
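
    If those perl modules were installed from CPAN rather than as RPMs, yum cannot see them: the dependencies have to exist in the RPM database. A hedged fix, assuming rpmforge/EPEL carry the modules under these package names:

        yum install perl-AnyEvent perl-YAML perl-TermReadKey

        # or, if the modules really are present via CPAN and you accept the risk,
        # fetch the rpm and skip the dependency check:
        rpm -ivh --nodeps memcached-1.4.5-1.el5.rf.x86_64.rpm

    The first route is cleaner; --nodeps leaves the RPM database believing dependencies are satisfied when they may not be.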

  • Programming Pearls (2nd Edition) vs More Programming Pearls: Confessions of a Coder [closed]

    - by Geek
    I have been reading very good reviews of two books by Jon Bentley: Programming Pearls (2nd Edition) and More Programming Pearls: Confessions of a Coder. I know that these books have been out there for a long time, and I feel bad that I haven't read either one, but it is always better late than never. I understand that the second one was written after the first. So, are these two books complementary to each other? Does the second one assume that the reader has read the first? For someone who hasn't read either, which one would you propose reading first?

  • black screen after waking up from suspend

    - by Daniel De Leon
    Running 11.10: on wake-up after being suspended (for a long time or for 2 seconds, it doesn't matter), I am presented with a black screen that has only a purple bar (about the size of the top menu bar). As far as I can tell, the system is unresponsive to anything short of a manual restart by pressing the power button on the tower, after which it starts normally. I am using the most up-to-date graphics driver, and I also tried switching to a different version; nothing. I just switched back to Linux, and this is really giving me a headache; I would really like this to work. Any help would be greatly appreciated.

  • TechEd 2010 Thanks and Demos

    - by Adam Machanic
    Thank you to everyone who attended my three sessions at this year's TechEd show in New Orleans. I had a great time presenting and answering the really great questions posed by attendees. My sessions were:

      - DAT317, "T-SQL Power! The OVER Clause: Your Key to No-Sweat Problem Solving": Have you ever stared at a convoluted requirement, unsure of where to begin and how to get there with T-SQL? Have you ever spent three days working on a long and complex query, wondering if there might be a better way? Good...
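
    For readers who missed the session, the flavor of the OVER-clause material can be compressed into a few lines of T-SQL; a small illustrative query against a hypothetical Sales table:

        -- per-group totals and per-group row numbering in a single pass,
        -- with no self-join or correlated subquery
        SELECT
            SalesPersonId,
            OrderDate,
            Amount,
            SUM(Amount)  OVER (PARTITION BY SalesPersonId) AS PersonTotal,
            ROW_NUMBER() OVER (PARTITION BY SalesPersonId
                               ORDER BY OrderDate)         AS OrderSeq
        FROM dbo.Sales;

    Both window functions above work on SQL Server 2005 and later; the running-aggregate forms (ORDER BY inside SUM ... OVER) need SQL Server 2012 or newer.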

  • Importing XML into an AWS RDS instance

    - by RoyHB
    I'm trying to load some XML into an AWS RDS (MySQL) instance. The XML (a dump of the ISO 3166 codes) looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <countries>
          <countries name="Afghanistan" alpha-2="AF" alpha-3="AFG" country-code="004" iso_3166-2="ISO 3166-2:AF" region-code="142" sub-region-code="034"/>
          <countries name="Åland Islands" alpha-2="AX" alpha-3="ALA" country-code="248" iso_3166-2="ISO 3166-2:AX" region-code="150" sub-region-code="154"/>
          <countries name="Albania" alpha-2="AL" alpha-3="ALB" country-code="008" iso_3166-2="ISO 3166-2:AL" region-code="150" sub-region-code="039"/>
          <countries name="Algeria" alpha-2="DZ" alpha-3="DZA" country-code="012" iso_3166-2="ISO 3166-2:DZ" region-code="002" sub-region-code="015"/>

    The command that I'm running is:

        LOAD XML LOCAL INFILE '/var/www/ISO-3166_SMS_Country_Codes.xml'
        INTO TABLE `ISO-3661-codes`(`name`,`alpha-2`,`alpha-3`,`country-code`,`region-code`,`sub-region-code`);

    The error message I get is:

        ERROR 1148 (42000): The used command is not allowed with this MySQL version

    The infile that is referenced exists, I've selected a database before running the command, and I have the appropriate privileges on the database. The column names in the database table exactly match the XML field names.
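
    ERROR 1148 is the generic symptom of LOCAL INFILE being disabled on the server side, the client side, or both. A hedged checklist (on RDS the server-side flag lives in the DB parameter group, since SET GLOBAL needs privileges RDS does not grant):

        -- check whether the server allows LOCAL INFILE
        SHOW VARIABLES LIKE 'local_infile';

        -- self-managed MySQL: SET GLOBAL local_infile = 1;
        -- RDS: set local_infile = 1 in the instance's DB parameter group instead

        -- and start the client with LOCAL enabled:
        -- $ mysql --local-infile=1 -h <endpoint> -u <user> -p <database>

    Dropping the LOCAL keyword is not an option here, since a plain LOAD XML INFILE reads from the RDS server's own filesystem, which you cannot reach.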

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients: 10, all Apples running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people, and at present we are looking to consolidate all our image files in one location. Currently we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally, what is needed:
      - Centralised on one server, but searchable via Spotlight (not essential, but would be nice)
      - Includes searchable metadata such as date, location, title
      - Open-source, or as low-cost as possible
      - Allows simultaneous users to import files
    So far, I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also putting a library on a drive with no permissions and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to it... Any ideas what could fulfil our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB hdds in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and still have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external enclosure. After the backup is done, I would replace the drives in the FreeNAS box, rebuild the array there, and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently), using the external drive enclosure on the Debian box. My big question is: most of these external drive boxes claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
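
    Assuming the enclosure does turn out to be fake RAID and is switched to JBOD mode, and the eSATA card's port-multiplier support lets all four disks show up individually, software RAID5 plus LVM is only a handful of commands. A hedged sketch, with the drives appearing as /dev/sdb through /dev/sde:

        # build the RAID5 array across the four enclosure disks
        mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # layer LVM on top so the space can be re-carved later
        pvcreate /dev/md0
        vgcreate backupvg /dev/md0
        lvcreate -l 100%FREE -n backuplv backupvg

        # filesystem and mount
        mkfs.ext3 /dev/backupvg/backuplv
        mount /dev/backupvg/backuplv /mnt/backup

    The usual caveat: whether all four drives are visible over one eSATA cable depends on the card's chipset supporting port multiplication under Linux libata, so check that before buying the enclosure.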

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions (shown as images in the original post).

    Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you.

    First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a third-party performance profiler, the CLR profiler or the Visual Studio profiler).

    The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to do to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements.

    Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time-consuming. Three respondents in particular summarised this perfectly:
      - "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem"
      - "they are simple to use, but the results are hard to understand"
      - "Complex to find the more advanced things, easy to find some low hanging fruit"

    These results confirmed my suspicions: profilers are seen to be designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle" with those for whom "performance is only looked at in times of crisis", and those for whom "performance is not important, as long as the app works". See the three graphs below (reproduced as images in the original post):
      - Sample of developers to whom performance is an important part of the dev cycle
      - Sample of developers to whom performance is important only in times of crisis
      - Sample of developers to whom performance is not important, as long as the app works

    As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are.

    So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third-party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.
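
    For readers wondering what "logging statements" look like in practice, the usual pattern is a Stopwatch wrapped around the suspect call; a minimal C# sketch (ProcessOrders is a hypothetical stand-in for the code under suspicion):

        using System;
        using System.Diagnostics;

        static void TimeProcessOrders()
        {
            var sw = Stopwatch.StartNew();
            ProcessOrders();   // the method under suspicion
            sw.Stop();
            // in real code this line goes to whatever logging framework is in use
            Console.WriteLine("ProcessOrders took {0} ms", sw.ElapsedMilliseconds);
        }

    Multiply that by every method you suspect, plus the log-file arithmetic afterwards, and the post's point about profilers doing the same job automatically, at line-level granularity, rather makes itself.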

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard, and it is not an option to upgrade the server or the SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements:
      - I do not want to have to create Active Directory accounts for each external contact
      - If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for, so that management can handle adding their own external contacts
      - Each client site must be sandboxed from the others and from my main company SharePoint site
      - I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname
    If anybody has ever done this, I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle
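
    What this describes is essentially forms-based authentication (FBA) with a SQL membership store, which WSS 3.0 supports per web application. A hedged fragment of the web.config wiring (server, database and provider names are placeholders; the zone also has to be switched to forms authentication in Central Administration):

        <connectionStrings>
          <add name="FbaDb"
               connectionString="Server=dbserver;Database=ExternalUsers;Integrated Security=SSPI" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="FbaMembers">
            <providers>
              <add name="FbaMembers"
                   type="System.Web.Security.SqlMembershipProvider, System.Web,
                         Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                   connectionStringName="FbaDb" />
            </providers>
          </membership>
        </system.web>

    The aspnet_regsql.exe tool creates the backing tables, and the "small GUI" requirement then reduces to calls against the ASP.NET Membership API; separate site collections (or web applications) per client give the sandboxing.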

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):
      - All NFS-based VMs remain up and work perfectly through the failover and after
      - All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only
    To get the VMs working again I have to do the following:
      1. Force shutdown of the "broken" VMs
      2. Detach the iSCSI SR
      3. Re-attach the iSCSI SR
      4. Boot the VM on a different server (5 in my pool)
    If I don't boot on a different server, I get the error "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server the VM was previously running on, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the VDI error above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
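
    For the timeout angle specifically, the open-iscsi parameter that governs how long I/O is queued while a path is down is replacement_timeout; a hedged /etc/iscsi/iscsid.conf fragment (XenServer may manage its own copy of this file, so verify where your release actually reads it from):

        # seconds to queue I/O while a session is re-established before erroring it out;
        # long enough to ride out an IP-takeover failover
        node.session.timeo.replacement_timeout = 120

        # send NOP-Out pings so dead sessions are detected promptly
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 10

    The guests' journaling errors suggest the block layer gave up before the failover finished, which is exactly what a too-small replacement_timeout produces.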

  • netlogon errors

    - by rorr
    I have two instances of MSSQL 2005 and am using CA XOsoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup worked great for several months until we recently restricted the RPC ports on both nodes of the master (5000-6000, using rpccfg.exe); we have to implement egress filtering, thus the limiting of the ports. We then began receiving login errors for SQL Windows authentication, and NETLOGON event ID 5719:

        This computer was not able to set up a secure session with a domain controller
        in domain due to the following: Not enough storage is available to process this
        command. This may lead to authentication problems. Make sure that this computer
        is connected to the network. If the problem persists, please contact your domain
        administrator.

    We also see group policies failing to update and cluster file shares going offline at the same time. The RPC ports were set back to default when we started seeing these problems, and the servers were rebooted, but the problems persist. The domain controllers are not showing any errors, and running dcdiag and netdiag shows everything is fine. We have noticed that the XOsoft service ws_rep.exe is using a lot of handles (8-9k), about the same number as sqlserver. As soon as XOsoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOsoft, but I'm not sure that the problem is actually XOsoft; it may just be what brings the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.
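
    For reference when auditing the rollback: rpccfg.exe persists the port restriction in the registry, so it is worth confirming the key really went away after the revert. The documented layout (from the standard RPC port-range guidance) is roughly:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\Internet
            Ports                   (REG_MULTI_SZ)  5000-6000
            PortsInternetAvailable  (REG_SZ)        Y
            UseInternetPorts        (REG_SZ)        Y

    Deleting the Internet key entirely and rebooting restores default RPC behaviour. It is also worth considering that a restricted range can be exhausted by a process holding many open connections, which would fit the ws_rep.exe handle counts you are seeing.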

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and Qmail. Our primary domain is hosted through Media Temple. When Plesk and Qmail are hosted on a single dedicated virtual server, Qmail reads the primary server IP and domain and reports them when sending emails from the system. Our pages are written in PHP, so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject it because it shows a different originating IP (our primary server IP and domain) than the domain we list in the 'from' address. This is not modifiable. Every domain we own does of course have its own IP underneath our primary server IP. I have seen several places online that provide a patch, specifically one which allows Domain Bindings:

        "DomainBindings -- For servers that host multiple domains or have multiple IP
        addresses assigned to them, it is sometimes useful (or important) to have qmail
        use a specific IP address for its outgoing mail. By default, qmail uses whatever
        address the OS chooses for all outbound connections. With this patch, you can
        specify which address to use. It uses a control file similar to smtproutes to
        specify the outbound IP address to use, based on the sender's domain."
        (pyropus.ca)

    First off, I do not have netqmail installed, so I'll need to find another source; but I am also completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be restored after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change, and it is one our company HAS to have. Any ideas?
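
    To give a feel for what the patch expects (a hypothetical illustration only: the exact file name and syntax depend on the patch version, so trust its README over this), the control file follows the smtproutes pattern of one sender-domain:IP pair per line:

        # /var/qmail/control/domainbindings  (hypothetical path)
        firstdomain.com:203.0.113.10
        seconddomain.com:203.0.113.11

    As for the other worries: patching qmail means rebuilding and reinstalling the binaries, not touching the mailboxes, so accounts and stored mail stay where they are; the risk window is the rebuild/restart itself, which is why testing the patched build on a scratch VM first is the usual advice.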

  • PHP/Linux File Permissions

    - by user1733435
    May I ask a question about file permissions? I set up an Ubuntu server where Apache is running. I have a simple PHP upload form and am able to upload files to /var/www/site/uploads, which then look like this:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 1736
        drwxrwxrwx 2 www-data www-data    4096 Oct 18 02:53 ./
        drwxrwxrwx 3 sandbox  sandbox     4096 Oct 18 00:42 ../
        -rw-r--r-- 1 www-data www-data  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 www-data www-data  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 www-data www-data 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    Is there any way to upload files so that they show up as

        -rw-r--r-- 1 sandbox sandbox  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 sandbox sandbox  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox sandbox 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    so that I could feed them straight to a waiting/running shell script? Right now the waiting script (move, checksums, rename, resize, etc.) is unable to do anything to uploaded files owned by www-data. If I just create a file as the local account, such as

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ touch testfile

    then the script is able to run as I would like. Any suggestion would be gratefully received; thanks in advance. Thanks to everyone giving help, with which I was able to progress. Now I am close to a solution, and I append the output:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 388
        drwxrwxrwx 2 www-data www-data   4096 Oct 18 04:22 ./
        drwxrwxrwx 3 sandbox  sandbox    4096 Oct 18 04:17 ../
        -rw-r--r-- 1 sandbox  sandbox  166947 Oct 18 04:21 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox  sandbox  219808 Oct 18 04:20 adafruit_pi.png
        -rw-rw-r-- 1 sandbox  sandbox       0 Oct 18 04:22 test

    How may I set permissions on uploaded files to be like 'test', where the only difference is the group-write bit in the middle (adafruit_pi.png vs test)? Which statement shall I insert into the PHP code, please?
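
    One caveat before any code: only root can chown a file, so PHP running as www-data cannot simply hand uploads over to the sandbox account. What the upload handler can do is make the files group-writable, so a script run as sandbox (after adding sandbox to the www-data group) can move/rename/resize them. A hedged sketch of the relevant part of the handler:

        <?php
        $dest = '/var/www/site/uploads/' . basename($_FILES['file']['name']);

        if (move_uploaded_file($_FILES['file']['tmp_name'], $dest)) {
            // 0664 = read-write for owner and group, read for others;
            // this matches the -rw-rw-r-- you see on 'test'
            chmod($dest, 0664);
        }

    The other common routes are a root cron/incron job that chowns new arrivals, or running the waiting shell script itself as www-data so that ownership stops mattering.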

  • The incomplete list of impolite WP7 dev requests

    In my previous list of WP7 user requests, I piled all of my user hopes and dreams for my new WP7 phone (delivery date: who the hell knows) onto the universe as a way to make good things happen. And all that's fine, but I'm not just a user; like most of my readers, I'm also a developer and have a need to control my phone with code. I have a long list of applications I want to write and an even longer list of applications I want other developers to write for me. Today at 1:30p...

  • iPad and User Assistance

    - by ultan o'broin
    What possibilities does the iPad offer for user assistance in the enterprise space? We will research the possibilities, but I can already see a number of them: remote workers who need access to troubleshooting information on-site, implementers who need reference information and diagrams, business analysts or technical users accessing reports and dashboards for metrics or issues, functional users who need org charts and other data visualizations, and so on. It could also open up more possibilities for collaborative problem solving. User assistance content can take advantage of the device's superb display, graphics capability, connectivity, and long battery life. The possibility of opening up more innovative user assistance solutions (such as comics) is an exciting one for everyone in the UX space. Alongside this possibility, we need to research how users would actually use the device as they work.

  • Exitus Acta Probat: The Post-Processing Module

    - by Phil Factor
    Sometimes, one has to make certain ethical compromises to ensure the success of a corporate IT project. Exitus Acta Probat (literally 'the result validates the deeds', meaning that the ends justify the means).

    It was a while back, whilst working as a Technical Architect for a well-known international company, that I was given the task of designing the architecture of a rather specialized accounting system. We'd tried an off-the-shelf (OTS) Windows-based solution, which crashed with dispiriting regularity and didn't quite do what the business required. After a great deal of research and planning, we commissioned a Unix-based system that used X terminals for the desktops of the participating staff. X terminals are now obsolete, but were then hot stuff: stripped-down Unix workstations that provided client GUIs for networked applications long before the days of AJAX, Flash, AIR and DHTML.

    I've never known a project go so smoothly. I'd been initially rather nervous about going the Unix route, believing then that Unix programmers were excitable creatures who were prone to indulge in role-play enactments of elves and wizards at the weekend, but the programmers I met from the company that did the work seemed to be rather donnish, earnest people who quickly grasped our requirements and were faultlessly professional in their work. After thinking lofty thoughts for a while, there was considerable pummeling of keyboards by our suppliers, and a beautiful, robust application was delivered to us ahead of schedule.

    Soon, the department who had commissioned the work received shiny new X terminals to replace their rather depressing lavatory-beige PCs. I modestly hung around as the application was commissioned and deployed to the department in order to receive the plaudits. They didn't come. Something was very wrong with the project. I couldn't put my finger on the problem, and the users weren't doing any more than desperately and futilely searching the application to find a fault with it.

    Many times in my life, I've come up against a predicament like this: the roll-out of an application goes wrong and you are hearing nothing that helps you discern the cause but nit-*** noise. There is a limit to the emotional heat you can pack into a complaint about text being in the wrong font, or an input form being slightly cramped, but they tried their best. The answer is, of course, one that every IT executive should have tattooed prominently where they can read it in emergencies: In Vino Veritas (literally, 'in wine the truth'; alcohol loosens the tongue. A Roman proverb).

    It was time to slap the wallet and get the department down the pub with the tab in my name. It was an eye-watering investment, but hedged with an over-confident IT director who relished my discomfort. To cut a long story short, the real reason gushed out with the third round. We had deprived them of their PCs, which had been good for very little from the pure business perspective, but had provided them with many hours of happiness playing computer-based minesweeper and solitaire. There is no more agreeable way of passing away the interminable hours of wage-slavery than minesweeper or solitaire, and the employees had applauded the munificence of their employer, who had provided them with the means to play it. I had, unthinkingly, deprived them of it.

    I held an emergency meeting with our suppliers the following day. I came over big with the notion that it was in their interests to provide a solution. They played it cool, probably knowing that it was my head on the block, not theirs. In the end, they came up with a compromise: they would temporarily descend from their lofty, cerebral stamping grounds in order to write a server-based Minesweeper and Solitaire game for X terminals, and install it in a concealed place within the system. We'd have to pay for it, though. I groaned. How could we do that? "Could we call it a 'post-processing module'?" suggested their account executive.

    And so it came to pass. The application was a resounding success. Every now and then, the staff were able to indulge in some 'post-processing', with what turned out to be a very fine implementation of both minesweeper and solitaire. There were several refinements: a single click on a 'boss' button turned the games into what looked just like a financial spreadsheet. They even threw in a multi-user version of Battleships. The extra payment for the post-processing module went through the change-control process without anyone untoward noticing, and peace once more descended.

    Only one thing niggles. Those games were good. Do they still survive, somewhere in a Linux library? If so, I'd like to claim a small part in their production.
