Search Results

Search found 5124 results on 205 pages for 'clean'.

  • Yii 'limit' on related model's scope

    - by pethee
    I have a model called Guesses that has_many Comments. I'm making eager queries against it and passing the result on as JSON in response to an API call. The relations between the two models are set up and correct (one-to-many / belongs-to). I added a scope to Comments called 'api' like this:

        public function scopes() {
            return array(
                'api' => array(
                    'select'   => 'id, comment, date',
                    'limit'    => 3,
                    'order'    => 'date DESC',
                    'together' => true,
                ),
            );
        }

    And I'm running the following one-liner query:

        $data = Guesses::model()->with('comments:api')->findAll();

    The issue is that when the 'api' scope is applied through with('relation'), the limit property simply doesn't apply. I added 'together' => true there for another type of scope, plus I hear it might help; it doesn't make a difference. I don't need all the comments of all Guesses, just the top 3 (or 5). I'm also trying to keep the one-liner call intact and simple, managing everything through scopes, relations and parameterized functions so that the API call itself stays clean and simple. Any advice?

  • How should a multi-threaded C application handle a failed malloc()?

    - by user294463
    A part of an application I'm working on is a simple pthread-based server that communicates over a TCP/IP socket. I am writing it in C because it's going to be running in a memory-constrained environment. My question is: what should the program do if one of the threads encounters a malloc() that returns NULL? Possibilities I've come up with so far:

        1. No special handling. Let malloc() return NULL and let it be dereferenced so that the whole thing segfaults.
        2. Exit immediately on a failed malloc(), by calling abort() or exit(-1). Assume that the environment will clean everything up.
        3. Jump out of the main event loop and attempt to pthread_join() all the threads, then shut down.

    The first option is obviously the easiest, but seems very wrong. The second one also seems wrong since I don't know exactly what will happen. The third option seems tempting except for two issues: first, all of the threads need not be joined back to the main thread under normal circumstances, and second, in order to complete the thread execution, most of the remaining threads will have to call malloc() again anyway. What shall I do?
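
    For illustration, here is a minimal sketch of the second option as a checked-allocation wrapper. The wrapper name and message are made up for this sketch; they are not part of the original question.

        #include <stdio.h>
        #include <stdlib.h>

        /* xmalloc: hypothetical checked wrapper around malloc().
         * On failure it logs to stderr and calls abort(), so the process
         * stops at a well-defined point instead of segfaulting later. */
        static void *xmalloc(size_t size)
        {
            void *p = malloc(size);
            if (p == NULL) {
                fprintf(stderr, "out of memory: could not allocate %zu bytes\n", size);
                abort();
            }
            return p;
        }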

  • Update all but one result?

    - by Jack M.
    I'm trying to update a table to remove all but the first instance of a group. Basically, I have a table with vehicle data related to an insurance policy. Each policy should only have one power_unit; everything else should be a towed unit. Unfortunately, a bug has been duplicating power units, and now I need to clean this up. There are ~10k records in the database, and ~4k of them have doubled-up power units. The important bits of my table (call it test1 for now) are:

        +------------+---------+------+-----+---------+----------------+
        | Field      | Type    | Null | Key | Default | Extra          |
        +------------+---------+------+-----+---------+----------------+
        | id         | int(10) | NO   | PRI | NULL    | auto_increment |
        | policy_id  | int(10) | NO   |     | NULL    |                |
        | power_unit | int(1)  | NO   |     | 0       |                |
        +------------+---------+------+-----+---------+----------------+

    And some sample data:

        +----+-----------+------------+
        | id | policy_id | power_unit |
        +----+-----------+------------+
        |  1 |         1 |          1 |
        |  2 |         1 |          1 |
        |  3 |         1 |          1 |
        |  4 |         2 |          1 |
        |  5 |         2 |          1 |
        |  6 |         2 |          1 |
        |  7 |         4 |          1 |
        |  8 |         4 |          1 |
        |  9 |         4 |          1 |
        | 10 |         5 |          1 |
        | 11 |         5 |          1 |
        | 12 |         6 |          1 |
        +----+-----------+------------+

    Basically I'd like to end up where policy_id 1 has only one row with power_unit = 1, and the same for policy_id 2, 4, 5, etc. For policy_id 6, nothing should change (there is only one entry, and it is a power_unit already). I don't know if this is possible, but it was an intriguing problem for me, so I thought you guys might find it the same.
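
    For what it's worth, a rough sketch of one way to express the cleanup, assuming MySQL and assuming the row with the lowest id per policy should stay the power unit while the duplicates are demoted to towed units (both assumptions, not stated in the question):

        UPDATE test1 AS t
        JOIN (
            SELECT policy_id, MIN(id) AS keep_id
            FROM test1
            WHERE power_unit = 1
            GROUP BY policy_id
        ) AS k ON k.policy_id = t.policy_id
        SET t.power_unit = 0              -- demote the duplicates to towed units
        WHERE t.power_unit = 1
          AND t.id <> k.keep_id;          -- keep only the lowest-id power unit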

  • Iterating Through N Level Children

    - by bobber205
    This seems like something neat that might be "built into" jQuery, but I think it's still worth asking. I have a problem that can easily be solved by iterating through all the children of an element. I've recently discovered I need to account for cases where I have to go a level or two deeper than the "1 level" (just calling .children() once) I am currently doing.

        jQuery.each(divToLookAt.children(), function(index, element) {
            //do stuff
        });

    This is what I'm currently doing. To go a second layer deep, I run another loop after the do-stuff code for each element.

        jQuery.each(divToLookAt.children(), function(index, element) {
            //do stuff
            jQuery.each(jQuery(element).children(), function(indexLevelTwo, elementLevelTwo) {
                //do stuff
            });
        });

    If I want to go yet another level deep, I have to do this all over again. This is clearly not good. I'd love to declare a "level" variable and then have it all taken care of. Anyone have any ideas for a clean, efficient, jQueryish solution? Thanks!
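
    One way to picture the "level variable" idea is a small recursive helper. This is just a sketch with made-up names, not something built into jQuery:

        // eachToDepth: visit children of $parent down to `depth` levels
        function eachToDepth($parent, depth, callback) {
            jQuery.each($parent.children(), function (index, element) {
                callback(index, element);                               // "do stuff" for this element
                if (depth > 1) {
                    eachToDepth(jQuery(element), depth - 1, callback);  // recurse one level deeper
                }
            });
        }

        // usage: visit children up to three levels deep
        eachToDepth(divToLookAt, 3, function (index, element) { /* do stuff */ });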

  • Dealing with large directories in a checkout

    - by Eric
    I am trying to come up with a version control process for a web app that I work on. Currently, my major stumbling blocks are two directories that are huge (both over 4GB). Only a few people need to work on things within the huge directories; most people don't even need to see what's in them. Our directory structure looks something like:

        /
        --file.aspx
        --anotherFile.aspx
        --/coolThings
        ----coolThing.aspx
        --/bigFolder
        ----someHugeMovie.mov
        ----someHugeSound.mp3
        --/anotherBigFolder
        ----...

    I'm sure you get the picture. It's hard to justify a checkout that has to pull down 8GB of data that's likely useless to a developer. I know, it's only once, but even once could be really frustrating for someone (and will make it harder for me to convince everyone to use source control). (Plus, clean checkouts will be painfully slow.) These folders do have to be available in the web application. What can I do? I've thought about separate repositories for the big folders. That way, you only download if you need it; but then how do I manage checking these out onto our development server? I've also thought about not trying to version control those folders: just update them directly on the web server... but I am not enamored of this idea. Is there some magic way to simply exclude directories from a checkout that I haven't found? (Pretty sure there is not.) Of course, there's always the option to just give up, bite the bullet, and accept downloading 8 useless GB. What say you? Have you encountered this problem before? How did you solve it?

  • What is the Rails way to DRY up the controller pattern of verifying :id is for a valid object (else redirect to error page)?

    - by jpwynn
    One of my controllers has close to 100 methods (eg routes), and nearly every one starts with the same code to redirect to an error page if the id param is invalid, followed by a similar check that the object for that id actually belongs to the current user's account:

        def something
          @foo = Foo.find_by_guid(params[:id])
          unless @foo
            @msg ||= { :title => 'No such page!', :desc => "There is no such page!" }
            render :action => "error" and return
          end
          unless @foo.owner_id == current_user.id
            @msg ||= { :title => 'Really?', :desc => "There is no such page." }
            render :action => "error" and return
          end

    What is the best way to DRY up that sort of page-id and owner-id validation, given the code is doing a render ... and return? What I don't want to do at this point is offload it to a blackbox roles-and-permissions library like CanCan... my goal is simply to have the in-app code that handles this be as clean as possible.
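
    As a sketch of the usual direction (a before_filter shared by the actions), with a made-up filter name; this is an illustration, not the poster's code, and it assumes a Rails 2/3-era controller where rendering inside a before_filter halts the chain:

        before_filter :load_foo_or_error

        private

        def load_foo_or_error
          @foo = Foo.find_by_guid(params[:id])
          unless @foo && @foo.owner_id == current_user.id
            @msg ||= { :title => 'No such page!', :desc => "There is no such page!" }
            render :action => "error"   # rendering here stops the action from running
            return false
          end
        end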

  • Is it wrong for a context (right click) menu to be the only way a user can perform a certain task?

    - by Eric
    I'd like to know if it ever makes sense to provide some functionality in a piece of software that is only available to the user through a context (right click) menu. It seems that in most software I've worked with the right click menu is always used as a quick way to get to features that are otherwise available from other buttons or menus. Below is a screen shot of the UI I'm developing. The tree view on the right shows the user's library of catalogs. Users can create new catalogs, or add and remove existing catalogs to and from their library. Catalogs in their library can then be opened or closed, or set to read-only. The screen shot shows the context menu I've created for the browser. Some commands can be executed independently from any specific catalog (New, Add). Yet the other commands must be applied to a specifically selected catalog (Close, Open, Remove, ReadOnly, Refresh, Clean UP, Rename). Currently the "Catalog" menu at the top of the window looks identical to this context menu. Yet I think this may be confusing to the users as the tree view which shows the currently selected catalog may not always be visible. The user may have switched to the Search or Filters tab, or the left pane may be hidden entirely. However, I'm hesitant to change the UI so that the commands that depends on a specifically selected catalog are only available through the context menu.

  • What does the `new` keyword do

    - by Mike
    I'm following a Java tutorial online, trying to learn the language, and it's bouncing between two semantics for using arrays:

        long results[] = new long[3];
        results[0] = 1;
        results[1] = 2;
        results[2] = 3;

    and:

        long results[] = {1, 2, 3};

    The tutorial never really mentioned why it switched back and forth between the two, so I searched a little on the topic. My current understanding is that the new operator creates an object of "array of longs" type. What I do not understand is why I would want that, and what the ramifications are. Are there certain "array"-specific methods that won't work on an array unless it's an "array object"? Is there anything that I can't do with an "array object" that I can do with a normal array? Does the Java VM have to do clean-up on objects initialized with the new operator that it wouldn't normally have to do? I'm coming from C, so my Java terminology may not be correct here; please ask for clarification if something's not understandable.
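
    As a quick illustration of how the two forms relate (a sketch written for this listing, not taken from the tutorial):

        public class ArrayForms {
            public static void main(String[] args) {
                long[] a = new long[3];          // explicit: allocate, then fill
                a[0] = 1; a[1] = 2; a[2] = 3;

                long[] b = {1, 2, 3};            // initializer shorthand for the same thing
                long[] c = new long[] {4, 5, 6}; // the shorthand written out with new

                // All three are heap-allocated array objects managed by the garbage
                // collector; Java has no separate "non-object" kind of array.
                System.out.println(a.length + " " + b.length + " " + c.length);
            }
        }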

  • Is there any way to output the actual array in c++

    - by user2511129
    So, I'm beginning C++, with a semi-adequate background in Python. In Python, you make a list/array like this:

        x = [1, 2, 3, 4, 5, 6, 7, 8, 9]

    Then, to print the list, with the square brackets included, all you do is:

        print x

    That would display this:

        [1, 2, 3, 4, 5, 6, 7, 8, 9]

    How would I do the exact same thing in C++: print the brackets and the elements, in an elegant/clean fashion? NOTE: I don't want just the elements of the array, I want the whole array, like this: {1, 2, 3, 4, 5, 6, 7, 8, 9}. When I use this code to try to print the array:

        #include <iostream>
        using namespace std;

        int main()
        {
            int anArray[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
            cout << anArray << endl;
        }

    the output is where in memory the array is stored (I think that's what it is; correct me if I'm wrong):

        0x28fedc

    As a side note, I don't know how to create an array with many different data types, such as integers, strings, and so on, so if someone can enlighten me, that'd be great! Thanks for answering my painstakingly obvious/noobish questions!
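
    For illustration, a minimal sketch of a helper that prints a built-in array in the braced form asked about (the helper name and exact output format are assumptions made for this sketch):

        #include <cstddef>
        #include <iostream>

        // Prints a built-in array as {1, 2, 3, ...}; the size N is deduced
        // from the reference-to-array parameter.
        template <typename T, std::size_t N>
        void printArray(const T (&arr)[N])
        {
            std::cout << '{';
            for (std::size_t i = 0; i < N; ++i) {
                if (i > 0) std::cout << ", ";
                std::cout << arr[i];
            }
            std::cout << '}' << std::endl;
        }

        int main()
        {
            int anArray[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
            printArray(anArray);   // prints {1, 2, 3, 4, 5, 6, 7, 8, 9}
        }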

  • How to tell the Session to throw the error query[NHibernate]?

    - by xandy
    I made a test class against the repository methods shown below:

        public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
        {
            try
            {
                _session.Save(FileToAdd);
                _session.Flush();
            }
            catch (Exception e)
            {
                if (e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
                    throw new ArgumentException("Unique Name must be unique");
                else
                    throw e;
            }
        }

        public void RemoveFile(File FileToRemove)
        {
            _session.Delete(FileToRemove);
            _session.Flush();
        }

    And the test class:

        try
        {
            Data.File crashFile = new Data.File();
            crashFile.UniqueName = "NonUniqueFileNameTest";
            crashFile.Extension = ".abc";
            repo.AddFile(crashFile);
            Assert.Fail();
        }
        catch (Exception e)
        {
            Assert.IsInstanceOfType(e, typeof(ArgumentException));
        }

        // Clean up the file
        Data.File removeFile = repo.GetFiles().Where(f => f.UniqueName == "NonUniqueFileNameTest").FirstOrDefault();
        repo.RemoveFile(removeFile);

    The test fails. When I step through to trace the problem, I find that the _session.Flush() right after _session.Delete() is what throws the exception, and if I look at the SQL it executes, it is actually submitting an "INSERT INTO" statement, which is exactly the SQL that causes the UNIQUE CONSTRAINT error. I tried to encapsulate both calls in a transaction, but the same problem still happens. Anyone know the reason?

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while only a very small subset of that data is used in daily operations.

    Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. P.S.: We do not currently have the budget or expertise for a data warehouse.

    The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance a Visitor_Archive and a Visitor_Small table), where the larger visitor and visit tables exist for inserts and history/reporting, and the smaller visitor1 table would exist for managing leads (sending a lead an email, looking up a lead's phone number, pulling my list of leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and the Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?

  • Python: Determine whether list of lists contains a defined sequence

    - by duhaime
    I have a list of sublists, and I want to see if any of the integer values from the first sublist, plus one, are contained in the second sublist. For all such values, I want to see if that value plus one is contained in the third sublist, and so on, proceeding in this fashion across all sublists. If there is a way of proceeding in this fashion from the first sublist to the last sublist, I wish to return True; otherwise I wish to return False. In other words, for each value in sublist one, for each "step" in a "walk" across all sublists read left to right, if that value + n (where n = number of steps taken) is contained in the current sublist, the function should return True; otherwise it should return False. (Sorry for the clumsy phrasing -- I'm not sure how to clean up my language without using many more words.) Here's what I wrote (indentation reconstructed from the flattened excerpt):

        a = [ [1,3],[2,4],[3,5],[6],[7] ]

        def find_list_traversing_walk(l):
            for i in l[0]:
                index_position = 0
                first_pass = 1
                walking_current_path = 1
                while walking_current_path == 1:
                    if first_pass == 1:
                        first_pass = 0
                        walking_value = i
                    if walking_value+1 in l[index_position + 1]:
                        index_position += 1
                        walking_value += 1
                        if index_position+1 == len(l):
                            print "There is a walk across the sublists for initial value ", walking_value - index_position
                            return True
                    else:
                        walking_current_path = 0
            return False

        print find_list_traversing_walk(a)

    My question is: have I overlooked something simple here, or will this function return True for all true positives and False for all true negatives? Are there easier ways to accomplish the intended task? I would be grateful for any feedback others can offer!
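
    For comparison, a rough sketch of a set-based way to express the same walk (my reformulation for this listing, not the poster's code):

        def walk_exists(lists):
            """Return True if some value v in lists[0] satisfies v + n in lists[n]
            for every later sublist (n = 1, 2, ...)."""
            candidates = set(lists[0])
            for step, sublist in enumerate(lists[1:], start=1):
                candidates = {v for v in candidates if v + step in sublist}
                if not candidates:
                    return False
            return bool(candidates)

        a = [[1, 3], [2, 4], [3, 5], [6], [7]]
        print(walk_exists(a))  # True, via the walk 3 -> 4 -> 5 -> 6 -> 7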

  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7) and after a little tweaking the website is working fine. The website is currently using and working with Anonymous Authentication. I have gone back to Windows Components / Server Manager, Roles - Security, and ticked and installed Windows Authentication. When I check my server in IIS (top level, above sites) - Authentication, I see:

        Anonymous Authentication (enabled)
        ASP.NET Impersonation (disabled)
        Forms Authentication (disabled)
        Windows Authentication (enabled)

    When I check my default website - Authentication, I see the same as above, but "Retrieving status" and an error dialog saying:

        There was an error while performing this operation.
        Details:
        Filename: c:\inetpub\wwwroot\screwturnwiki\web.config
        Line number: 96
        Error: This configuration section cannot be used in this path. This happens when the section is being locked at the parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="False".

    I have tried hand-editing the web.config with no success. (How to use locking in IIS7 Configuration.) Un-installing Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable these three options. FYI, I am using ScrewTurnWiki with the Active Directory plug-in. It all works fine under Windows 7 / IIS 7 locally (and has been for months). Web.config:

        <system.webServer>
            (edit)
            <handlers>
                ( deleted removes/adds )
            </handlers>
            <security>
                <authentication>
        96:         <windowsAuthentication enabled="true" useKernelMode="true">
                        <extendedProtection tokenChecking="Allow" />
                        <providers>
                            <clear />
                            <add value="NTLM" />
                            <add value="Negotiate" />
                        </providers>
                    </windowsAuthentication>
                </authentication>
            </security>

  • IIS 7.5 Error 1007

    - by darkdog
    I have a Windows 2008 R2 virtual server. I removed the "Default Web Site" using IIS Manager, and now all my other sites report an error saying they can't access the site (or something similar). I thought I could just remove the role, re-install IIS again and start with a clean slate. After I re-installed IIS, it's now reporting the following error in the event log:

        The World Wide Web Publishing Service (WWW service) failed to register the URL prefix of "http://www.mash-guild.com:80:192.168.245.132/" for the website of "4". The required network connectivity may already be used. The site has been disabled. The data field contains the error number.

    This is the full version with the German error text (I'm from Germany):

        Der WWW-Publishingdienst (WWW-Dienst) konnte das URL-Präfix "http://www.mash-guild.com:80:192.168.245.132/" für die Website "4" nicht registrieren. Die erforderliche Netzwerkverbindung wird möglicherweise bereits verwendet. Die Website wurde deaktiviert. Das Datenfeld enthält die Fehlernummer.

    Event XML:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-IIS-W3SVC" Guid="{05448E22-93DE-4A7A-BBA5-92E27486A8BE}" EventSourceName="W3SVC" />
            <EventID Qualifiers="49152">1007</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2011-03-26T16:14:56.000000000Z" />
            <EventRecordID>1435</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>System</Channel>
            <Computer>WIN-DCJ8SN0QI5J</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="UrlPrefix">http://www.mash-guild.com:80:192.168.245.132/</Data>
            <Data Name="SiteID">4</Data>
            <Binary>B7000780</Binary>
          </EventData>
        </Event>

    I would like to know if there is a way to fix this, or to just get back to the IIS + server settings I had right after I installed Windows 2008?

  • Unable to delete a file using bash script

    - by user3719091
    I'm having problems removing a file in a bash script. I saw the other post with the same problem, but none of those solutions solved my problem. The bash script is an OP5 surveillance check; it calls an Expect process that saves a temporary file to the local drive, which the bash script reads from. Once it has read the file and checked its status, I would like to remove the temporary file. I'm pretty new to scripting, so my script may not be as optimal as it could be. Either way it does the job, except for removing the file once it's done. The entire script is below:

        #!/bin/bash

        #GET FLAGS
        while getopts H:c:w: option
        do
            case "${option}" in
                H) HOSTADDRESS=${OPTARG};;
                c) CRITICAL=${OPTARG};;
                w) WARNING=${OPTARG};;
            esac
        done

        ./expect.vpn.check.sh $HOSTADDRESS

        #VARIABLES
        VPNCount=$(grep -o '[0-9]\+' $HOSTADDRESS.op5.vpn.results)

        # Check if the temporary results file exists
        if [ -f $HOSTADDRESS.op5.vpn.results ]
        then
            # If the file exists, print "File Found" message
            echo Temporary results file exists. Analyze results.
        else
            # If the file does NOT exist, print "File NOT Found" message and send message to OP5
            echo Temporary results file does NOT exist. Unable to analyze.
            # Exit with status Critical (exit code 2)
            exit 2
        fi

        if [[ "$VPNCount" > $CRITICAL ]]
        then
            # If the amount of tunnels exceeds the critical threshold, echo a warning message and the current count, and send a warning to OP5
            echo "The amount of VPN tunnels exceeds the critical threshold - ($VPNCount)"
            # Exit with status Critical (exit code 2)
            exit 2
        elif [[ "$VPNCount" > $WARNING ]]
        then
            # If the amount of tunnels exceeds the warning threshold, echo a warning message and the current count, and send a warning to OP5
            echo "The amount of VPN tunnels exceeds the warning threshold - ($VPNCount)"
            # Exit with status Warning (exit code 1)
            exit 1
        else
            # The amount of tunnels does not exceed the warning threshold.
            # Print an OK message
            echo OK - $VPNCount
            # Exit with status OK
            exit 0
        fi

        #Clean up temporary files.
        rm -f $HOSTADDRESS.op5.vpn.results

    I have tried the following solutions:

        - Creating a separate variable called TempFile that specifies the file, and using that in the rm command.
        - Creating another if statement, similar to the one I use to verify that the file exists, and rm'ing the file name inside it.
        - Adding the complete name of the file (no variables, just the plain text name of the file).

    I can remove the file using the full name both in a separate script and directly on the CLI. Is there something in my script that locks the file and prevents me from removing it? I'm not sure what to try next. Thanks in advance!
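
    One thing worth noting about the script as posted: every branch of the final if/elif/else ends in exit, so the rm at the bottom is never reached. A common pattern (shown here only as a sketch, not necessarily what the poster intends) is to register the cleanup in a trap near the top of the script so it runs on any exit path:

        # Remove the temporary file whenever the script exits, whatever the exit code.
        trap 'rm -f "$HOSTADDRESS.op5.vpn.results"' EXIT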

  • Failed MDADM Array With Ext.4 Partition - "e2fsck: unable to set superblock flags on /dev/md0"

    - by Matthew Hodgkins
    Had a power failure and now my mdadm array is having problems.

        [hodge@hodge-fs ~]$ sudo mdadm -D /dev/md0
        /dev/md0:
                Version : 0.90
          Creation Time : Sun Apr 25 01:39:25 2010
             Raid Level : raid5
             Array Size : 8790815232 (8383.57 GiB 9001.79 GB)
          Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB)
           Raid Devices : 7
          Total Devices : 7
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Sat Aug 7 19:10:28 2010
                  State : clean, degraded, recovering
         Active Devices : 6
        Working Devices : 7
         Failed Devices : 0
          Spare Devices : 1
                 Layout : left-symmetric
             Chunk Size : 128K
         Rebuild Status : 10% complete
                   UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs)
                 Events : 0.1307608

            Number   Major   Minor   RaidDevice State
               0       8       81        0      active sync   /dev/sdf1
               1       8       97        1      active sync   /dev/sdg1
               2       8      113        2      active sync   /dev/sdh1
               3       8       65        3      active sync   /dev/sde1
               4       8       49        4      active sync   /dev/sdd1
               7       8       33        5      spare rebuilding   /dev/sdc1
               6       8       16        6      active sync   /dev/sdb

        [hodge@hodge-fs ~]$ sudo mount -a
        mount: wrong fs type, bad option, bad superblock on /dev/md0,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        [hodge@hodge-fs ~]$ sudo fsck.ext4 /dev/md0
        e2fsck 1.41.12 (17-May-2010)
        fsck.ext4: Group descriptors look bad... trying backup blocks...
        /dev/md0: recovering journal
        fsck.ext4: unable to set superblock flags on /dev/md0

        [hodge@hodge-fs ~]$ sudo dumpe2fs /dev/md0 | grep -i superblock
        dumpe2fs 1.41.12 (17-May-2010)
        Primary superblock at 0, Group descriptors at 1-524
        Backup superblock at 32768, Group descriptors at 32769-33292
        Backup superblock at 98304, Group descriptors at 98305-98828
        Backup superblock at 163840, Group descriptors at 163841-164364
        Backup superblock at 229376, Group descriptors at 229377-229900
        Backup superblock at 294912, Group descriptors at 294913-295436
        Backup superblock at 819200, Group descriptors at 819201-819724
        Backup superblock at 884736, Group descriptors at 884737-885260
        Backup superblock at 1605632, Group descriptors at 1605633-1606156
        Backup superblock at 2654208, Group descriptors at 2654209-2654732
        Backup superblock at 4096000, Group descriptors at 4096001-4096524
        Backup superblock at 7962624, Group descriptors at 7962625-7963148
        Backup superblock at 11239424, Group descriptors at 11239425-11239948
        Backup superblock at 20480000, Group descriptors at 20480001-20480524
        Backup superblock at 23887872, Group descriptors at 23887873-23888396
        Backup superblock at 71663616, Group descriptors at 71663617-71664140
        Backup superblock at 78675968, Group descriptors at 78675969-78676492
        Backup superblock at 102400000, Group descriptors at 102400001-102400524
        Backup superblock at 214990848, Group descriptors at 214990849-214991372
        Backup superblock at 512000000, Group descriptors at 512000001-512000524
        Backup superblock at 550731776, Group descriptors at 550731777-550732300
        Backup superblock at 644972544, Group descriptors at 644972545-644973068
        Backup superblock at 1934917632, Group descriptors at 1934917633-1934918156

        [hodge@hodge-fs ~]$ sudo e2fsck -b 32768 /dev/md0
        e2fsck 1.41.12 (17-May-2010)
        /dev/md0: recovering journal
        e2fsck: unable to set superblock flags on /dev/md0

        [hodge@hodge-fs ~]$ sudo dmesg | tail
        EXT4-fs (md0): ext4_check_descriptors: Checksum for group 0 failed (59837!=29115)
        EXT4-fs (md0): group descriptors corrupted!
        EXT4-fs (md0): ext4_check_descriptors: Checksum for group 0 failed (59837!=29115)
        EXT4-fs (md0): group descriptors corrupted!

    Please Help!!!

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398

    The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account, as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same:

        Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error.
        [OLE/DB provider returned message: Unspecified error]
        OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ].

    Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, but what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried:

        - Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.)
        - Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference.
        - Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem because the mdb needs to be on the client machine for performance reasons and the ability to work "stand alone". Plus, the same shared folder access works fine on all other servers / clients on my network.
        - Compared all the SQL Server, Windows Server, etc. versions to a known working setup, and everything appears to be the same.

  • Redhat | Openssl installation error

    - by MMRUSer
        make -f objs/Makefile
        make[1]: Entering directory `/root/fuse-ssh/nginx-0.7.65'
        cd /usr/bin/openssl \
        && make clean \
        && ./config --prefix=/usr/bin/openssl/.openssl no-shared no-threads \
        && make \
        && make install
        /bin/sh: line 0: cd: /usr/bin/openssl: Not a directory
        make[1]: *** [/usr/bin/openssl/.openssl/include/openssl/ssl.h] Error 1
        make[1]: Leaving directory `/root/fuse-ssh/nginx-0.7.65'
        make: *** [build] Error 2

    Where is the actual location of openssl? There are several different places on my system. How do I solve this issue?

        rpm -ql openssl
        /usr/bin/openssl
        /usr/lib64/openssl
        /usr/lib64/openssl/engines
        /usr/lib64/openssl/engines/lib4758cca.so
        /usr/lib64/openssl/engines/libaep.so
        /usr/lib64/openssl/engines/libatalla.so
        /usr/lib64/openssl/engines/libchil.so
        /usr/lib64/openssl/engines/libcswift.so
        /usr/lib64/openssl/engines/libgmp.so
        /usr/lib64/openssl/engines/libnuron.so
        /usr/lib64/openssl/engines/libsureware.so
        /usr/lib64/openssl/engines/libubsec.so
        /usr/share/doc/openssl-0.9.8e
        /usr/share/doc/openssl-0.9.8e/CHANGES
        /usr/share/doc/openssl-0.9.8e/FAQ
        /usr/share/doc/openssl-0.9.8e/INSTALL
        /usr/share/doc/openssl-0.9.8e/LICENSE
        /usr/share/doc/openssl-0.9.8e/NEWS
        /usr/share/doc/openssl-0.9.8e/README
        /usr/share/doc/openssl-0.9.8e/README.FIPS
        /usr/share/doc/openssl-0.9.8e/c-indentation.el
        /usr/share/doc/openssl-0.9.8e/openssl.txt
        /usr/share/doc/openssl-0.9.8e/openssl_button.gif
        /usr/share/doc/openssl-0.9.8e/openssl_button.html
        /usr/share/doc/openssl-0.9.8e/ssleay.txt
        /usr/bin/openssl
        /usr/lib/openssl
        /usr/lib/openssl/engines
        /usr/lib/openssl/engines/lib4758cca.so
        /usr/lib/openssl/engines/libaep.so
        /usr/lib/openssl/engines/libatalla.so
        /usr/lib/openssl/engines/libchil.so
        /usr/lib/openssl/engines/libcswift.so
        /usr/lib/openssl/engines/libgmp.so
        /usr/lib/openssl/engines/libnuron.so
        /usr/lib/openssl/engines/libsureware.so
        /usr/lib/openssl/engines/libubsec.so
        /usr/share/doc/openssl-0.9.8e
        /usr/share/doc/openssl-0.9.8e/CHANGES
        /usr/share/doc/openssl-0.9.8e/FAQ
        /usr/share/doc/openssl-0.9.8e/INSTALL
        /usr/share/doc/openssl-0.9.8e/LICENSE
        /usr/share/doc/openssl-0.9.8e/NEWS
        /usr/share/doc/openssl-0.9.8e/README
        /usr/share/doc/openssl-0.9.8e/README.FIPS
        /usr/share/doc/openssl-0.9.8e/c-indentation.el
        /usr/share/doc/openssl-0.9.8e/openssl.txt
        /usr/share/doc/openssl-0.9.8e/openssl_button.gif
        /usr/share/doc/openssl-0.9.8e/openssl_button.html
        /usr/share/doc/openssl-0.9.8e/ssleay.txt

    These are the places. Thanks.

  • 5.5.0 smtp;554 transaction failed spam message not queued

    - by Miguel
    Some users are trying to send email to certain domains using Exchange Server 2003, but the message is always rejected and the following error is shown:

        5.5.0 smtp;554 Transaction Failed Spam Message not queued

    The IP is not on a blacklist (checked using http://whatismyipaddress.com/blacklist-check; it is clean / not listed). The emails were checked using smtpdiag ("a troubleshooting tool designed to work directly on a Windows server with IIS/SMTP service enabled or with Exchange Server installed") and the connection on port 25 is OK. Also, an nslookup with set type=ptr shows (names and IPs changed; "" means I typed something):

        C:\Documents and Settings\administrator>nslookup
        Default Server: publicdns.isp.net
        Address: 10.10.10.10

        > server publicdns.isp.net
        Default Server: publicdns.isp.net
        Address: 10.10.10.10
        > set type=ptr
        > mydomain.com
        Server: publicdns.isp.net
        Address: 10.10.10.10
        mydomain.com
            primary name server = publicdns.isp.net
            responsible mail addr = root.isp.net
            serial = 2011061301
            refresh = 10800 (3 hours)
            retry = 3600 (1 hour)
            expire = 604800 (7 days)
            default TTL = 86400 (1 day)
        > 20.21.22.23
        Server: publicdns.isp.net
        Address: 10.10.10.10
        23.22.21.20.in-addr.arpa name = mail.mydomain.com
        20.21.in-addr.arpa nameserver = publicdns.isp.net
        20.21.in-addr.arpa nameserver = publicdns2.isp.net
        publicdns2.isp.net internet address = 10.10.10.11
        publicdns.isp.net internet address = 10.10.10.10
        Server: publicdns.isp.net
        Address: 10.10.10.10
        23.22.21.20.in-addr.arpa name = mail.mydomain.com
        20.21.in-addr.arpa nameserver = publicdns.isp.net
        20.21.in-addr.arpa nameserver = publicdns2.isp.net
        publicdns2.isp.net internet address = 10.10.10.11
        publicdns.isp.net internet address = 10.10.10.10
        > set type=mx
        > mydomain.com
        Server: publicdns.isp.net
        Address: 10.10.10.10
        mydomain.com MX preference = 10, mail exchanger = mail.mydomain.com
        mydomain.com nameserver = publicdns.isp.net
        mydomain.com nameserver = publicdns2.isp.net
        mail.mydomain.com internet address = 20.21.22.23
        publicdns2.isp.net internet address = 10.10.10.11
        publicdns.isp.net internet address = 10.10.10.10
        > set type=a
        > mydomain.com
        Server: publicdns.isp.net
        Address: 10.10.10.10
        Nombre: mydomain.com
        Address: 20.21.22.23

    When I test the SPF record with http://www.mxtoolbox.com it shows:

        TXT mydomain.com 24 hrs v=spf1 a mx ptr ip4:20.21.22.23 mx:mail.mydomain.com -all

    Any clues to what's happening here?

  • Using Diskpart in a PowerShell script won't allow script to reuse drive letter

    - by Kyle
    I built a script that mounts (attaches) a VHD using Diskpart, cleans out some system files and then unmounts (detaches) it. It uses a foreach loop and is supposed to clean multiple VHDs using the same drive letter. However, after the first VHD it fails. I also noticed that when I try to manually attach a VHD with diskpart, diskpart succeeds and Disk Manager shows the disk with the correct drive letter, but within the same PoSH instance I cannot connect (Set-Location) to that drive. If I run a manual diskpart when I first open PoSH, I can attach and detach all I want and I get the drive letter every time. Is there something I need to do to reset diskpart in the script? Here's a snippet of the script I'm using (line breaks reconstructed from the flattened excerpt):

        function Mount-VHD {
            [CmdletBinding()]
            param (
                [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)] [string]$Path,
                [Parameter(Position=1,Mandatory=$false,ValueFromPipeline=$false)] [string]$DL,
                [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
                [switch]$Rescan
            )
            begin {
                function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }
                ## Validate Operating System Version ##
                if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."}
            }
            process {
                ## Diskpart Script Content ## Here-String statement purposefully not indented ##
        @"
        $(if ($Rescan) {'Rescan'})
        Select VDisk File="$Path" `nAttach VDisk
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
                Start-Sleep -Seconds 3
        @"
        Select VDisk File="$Path"`nSelect partition 1 `nAssign Letter="$DL"
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
            }
            end {
                Remove-Item -Path $DiskpartScript -Force ; ""
                Write-Host "The VHD ""$Path"" has been successfully mounted." ; ""
            }
        }

        function Dismount-VHD {
            [CmdletBinding()]
            param (
                [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)] [string]$Path,
                [switch]$Remove,
                [switch]$NoConfirm,
                [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
                [switch]$Rescan
            )
            begin {
                function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }
                function RemoveVHD {
                    switch ($NoConfirm) {
                        $false {
                            ## Prompt for confirmation to delete the VHD file ##
                            "" ; Write-Warning "Are you sure you want to delete the file ""$Path""?"
                            $Prompt = Read-Host "Type ""YES"" to continue or anything else to break"
                            if ($Prompt -ceq 'YES') {
                                Remove-Item -Path $Path -Force
                                "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                            } else {
                                "" ; Write-Host "Script terminated without deleting the VHD file." ; ""
                            }
                        }
                        $true {
                            ## Confirmation prompt suppressed ##
                            Remove-Item -Path $Path -Force
                            "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                        }
                    }
                }
                ## Validate Operating System Version ##
                if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."}
            }
            process {
                ## DiskPart Script Content ## Here-String statement purposefully not indented ##
        @"
        $(if ($Rescan) {'Rescan'})
        Select VDisk File="$Path"`nDetach VDisk
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
                Start-Sleep -Seconds 10
            }
            end {
                if ($Remove) {RemoveVHD}
                Remove-Item -Path $DiskpartScript -Force ; ""
            }
        }

  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:

        1) Run Image Prep Utility (I wrote) on Windows to add my runonce entries and clean a few things up.
        2) Reboot to Ghost, make image file.
        3) Package into my ISO and distribute.
        4) System will be imaged by user.
        5) On first boot, I have about 5 things that run, one of which includes a driver updater (I wrote) for my own specific devices.
        6) One of the entries inside of HKCU/../runonce is a reg file, which adds another key to HKLM/../runonce. This is how second boot is acquired.
        7) As a result of the driver updater, user is prompted to reboot.
        8) My application is then launched from HKLM/../runonce on second boot.

    This workflow works perfectly, except for a select few legacy systems that contain devices that cause the Add Hardware wizard to pop up. When the Add Hardware wizard pops up is when I begin to see problems. It's important to note that if I manually inspect the registry after the Add Hardware wizard pops up, it appears as I would expect, with all the first-boot scripts having run, and it's sitting in a state I would correctly expect it to be in for a second-boot scenario. The problem comes when I click Next on the Add Hardware wizard: it seems to re-run the single entry I've added, and re-executes the runonce scripts (only one script now, as it's already executed and cleared out the initial entries). This causes my application to open as if it were a second boot, only when Next is clicked on the Add Hardware wizard. If I click Cancel and reboot, then it also works as expected. I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,

  • missing files after reassemble of RAID-5

    - by Kris_R
    I had to open my file server's housing on Sunday to replace a faulty fan. What I didn't see was that one of the SATA cables was not properly connected. The first thing I did after a reboot was a check of the RAID status, and it showed immediately that one drive was missing. Up to this moment the device was not used (however it was mounted, so I'm not 100% sure the system did nothing). I stopped md0 and re-plugged the cable:

        mdadm --stop /dev/md0
        poweroff

    After another reboot I checked the removed drive:

        mdadm --examine /dev/sdd1
        ...
              Checksum : 3276bc1d - correct
                Events : 315782
                Layout : left-symmetric
            Chunk Size : 32K

              Number   Major   Minor   RaidDevice State
        this     0       8       49        0      active sync   /dev/sdd1
           0     0       8       49        0      active sync   /dev/sdd1
           1     1       8       65        1      active sync   /dev/sde1
           2     2       8       33        2      active sync   /dev/sdc1
           3     3       8       17        3      active sync   /dev/sdb1

    I was a bit surprised that it was shown as active (even though mdadm had earlier said that this device was removed from the array) and that its checksum was OK. I recreated the RAID with:

        mdadm --assemble /dev/md0 --scan

    The command mdadm --detail /dev/md0 showed that all drives were running and the system was in a "clean" state. I mounted the device md0 and then came the hiccup. I wanted to work on one of the last files that I had been using before all this happened, and it was not there. In another place I am actually missing all the files from the directory where I was working. As far as I can see, most of the files that are older than a few days are intact, but some newer ones are missing. Now the big question: what would be your advice? Is there a way to get these data back? I thought about removing the drive that was earlier labeled by mdadm and rebuilding the array with another empty HDD. I've found that after re-assembly the "broken" drive has another label (changed from sdd to sdb). Can this have an influence on the rebuild process? If yes, how do I reassemble the array properly? I'm sure the SATA cables are still connected to the controller in the same order.

    p.s. Please, no advice like "restore from backup". I do backups on Sunday nights and this happened in the late afternoon, so backup is not really an option for me.

    p.p.s. I asked this question on Unix & Linux but no answer came up during the last two days. I'm getting quite anxious. Sorry for duplicating if any of you read the other forum.

  • iPhone 3G refuses to transfer purchased apps to iTunes

    - by andynormancx
    My iPhone 3G refuses to transfer purchased apps to iTunes. This is causing me major problems with syncing. Whenever I attempt to transfer apps from the iPhone to iTunes it goes through the motions, but never actually transfers anything. It displays the various apps in the info area at the top of the screen, but the progress bar never advances. In comparison, when I sync other iPhones using the same install of iTunes, the progress bar advances and apps are transferred. The same also happens on clean installs of iTunes on other computers; it seems to be my iPhone that is the common factor. I have tried restoring the phone from a backup, which makes no difference. This started happening months ago and the phone has since been upgraded to 3.0 and 3.1, but the problem still persists.

    Originally it was just a minor irritation, but I made an attempt to fix it which has made things worse. I deleted all the apps from within iTunes and then did "Transfer purchases" in the hope that it might fix something. It didn't fix anything. Also, I cannot now sync at all. If I do sync, iTunes does "transferring purchases", fails to transfer, and then deletes all the apps (and data) from my iPhone. It also means I can't sync music, podcasts or anything else. I can't sync anything else because I can't temporarily turn off app syncing: if I do, iTunes warns that the apps on the iPhone will be deleted. I also tried de-authorising and re-authorising. What can I do to get app syncing working again?

    P.S. I have considered deleting all the apps and reinstalling them one by one, in the hope that it will fix the problem. However, I don't really want to embark on doing that for 55+ apps and re-entering login details etc. for the apps that need them, especially as I might then find out it didn't solve the problem.

    Update: The latest update to iTunes 9 has improved things in one key aspect. If I let a sync run to completion, iTunes no longer deletes all the apps from my phone. So I can now sync all my other data, even if I still can't sync my apps.

    Resolved: See my answer to the question for how I finally resolved the problem.

  • How to improve this bash shell script for turning hardlinks into symlinks?

    - by MountainX
    This shell script is mostly the work of other people. It has gone through several iterations, and I have tweaked it slightly while also trying to fully understand how it works. I think I understand it now, but I don't have the confidence to significantly alter it on my own and risk losing data when I run the altered version. So I would appreciate some expert guidance on how to improve this script. The changes I am seeking are:

        1. Make it even more robust to any strange file names, if possible. It currently handles spaces in file names, but not newlines. I can live with that (because I try to find any file names with newlines and get rid of them).
        2. Make it more intelligent about which file gets retained as the actual inode content and which file(s) become symlinks. I would like to be able to choose to retain the file that is either a) the shortest path, b) the longest path, or c) has the filename with the most alpha characters (which will probably be the most descriptive name).
        3. Allow it to read the directories to process either from parameters passed in or from a file.
        4. Optionally, write a log of all changes and/or all files not processed.

    Of all of these, #2 is the most important for me right now. I need to process some files with it and I need to improve the way it chooses which files to turn into symlinks. (I tried using things like the find option -depth without success.) Here's the current script (the find patterns that exclude paths containing a newline are split across two lines, as in the original):

        #!/bin/bash

        # clean up known problematic files first.
        ## find /home -type f -wholename '*Icon*
        ## *' -exec rm '{}' \;

        # Configure script environment
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        set -o nounset
        dir='/SOME/PATH/HERE/'

        # For each path which has multiple links
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # (except ones containing newline)
        last_inode=
        while IFS= read -r path_info
        do
            #echo "DEBUG: path_info: '$path_info'"
            inode=${path_info%%:*}
            path=${path_info#*:}
            if [[ $last_inode != $inode ]]; then
                last_inode=$inode
                path_to_keep=$path
            else
                printf "ln -s\t'$path_to_keep'\t'$path'\n"
                rm "$path"
                ln -s "$path_to_keep" "$path"
            fi
        done < <( find "$dir" -type f -links +1 ! -wholename '*
        *' -printf '%i:%p\n' | sort --field-separator=: )

        # Warn about any excluded files
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        buf=$( find "$dir" -type f -links +1 -path '*
        *' )
        if [[ $buf != '' ]]; then
            echo 'Some files not processed because their paths contained newline(s):'$'\n'"$buf"
        fi

        exit 0
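
    On point #2, here is a rough sketch of how the "keeper" might be chosen; this is my illustration, not part of the script, and it implements only one of the three criteria mentioned (shortest path):

        # choose_keeper: given candidate paths as arguments, print the one with the
        # shortest path (swap `head` for `tail` to prefer the longest path instead)
        choose_keeper() {
            printf '%s\n' "$@" | awk '{ print length, $0 }' | sort -n | head -n 1 | cut -d' ' -f2-
        }

        # example: path_to_keep=$(choose_keeper "$path_a" "$path_b" "$path_c")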
