Search Results

Search found 1056 results on 43 pages for 'seperate'.

Page 38 of 43

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically, I want to know whether the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide between a higher-clocked dual core and a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler). If the most important section (the background compiler and other IDE tasks) runs on one core, then that core will hit its limit sooner, especially if the background compiler is the heaviest task. I would imagine this would be difficult to separate into more than one process, so even if VS uses multiple cores you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to occur on one core (i.e. the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... Anyone have any ideas? Thanks, erx

  • jQuery script works in Firefox but not in IE. Why am I not surprised?

    - by Ben Tew
    I'm working within the context of a CMS system and trying to turn separate divs into tabs. You can see it at http://www.wtvynews4.com/test. I've kludged together some code from a tutorial site:

        <script charset="utf-8" type="text/javascript">
        jQuery(function() {
            //When page loads...
            $("div[ondblclick$='87119417']").attr("id", "87119417");
            $("div[ondblclick$='87119482']").attr("id", "87119482");
            $("div[ondblclick$='87119672']").attr("id", "87119672");
            $("div[ondblclick$='87119727']").attr("id", "87119727");
            $("div[ondblclick$='87119812']").attr("id", "87119812");
            $("div[ondblclick$='87119417']").addClass("tab_content");
            $("div[ondblclick$='87119482']").addClass("tab_content");
            $("div[ondblclick$='87119672']").addClass("tab_content");
            $("div[ondblclick$='87119727']").addClass("tab_content");
            $("div[ondblclick$='87119812']").addClass("tab_content");

            $(".tab_content").hide();                                //Hide all content
            $("ul.morenewstabs li:first").addClass("active").show(); //Activate first tab
            $(".tab_content:first").show();                          //Show first tab content

            //On Click Event
            $("ul.morenewstabs li").click(function() {
                $("ul.morenewstabs li").removeClass("active");  //Remove any "active" class
                $(this).addClass("active");                     //Add "active" class to selected tab
                $(".tab_content").hide();                       //Hide all tab content
                var activeTab = $(this).find("a").attr("href"); //Find the href attribute value to identify the active tab + content
                $(activeTab).show();                            //Fade in the active ID content
                return false;
            });
        });
        </script>

    Everything works fine in Firefox but not in IE. Can you provide any assistance? When the page loads, the attribute IDs and classes aren't assigned. I tried changing jQuery(function() { to $(document).ready(function() {, but still no luck.
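
    As an aside, the ten selector calls could be collapsed into a loop over the IDs; a minimal sketch, assuming the same five values:

        var ids = ["87119417", "87119482", "87119672", "87119727", "87119812"];
        $.each(ids, function (i, id) {
            // tag each matching div with its id and the shared tab class
            $("div[ondblclick$='" + id + "']").attr("id", id).addClass("tab_content");
        });

    It may also be worth checking whether IE is the odd one out because of the ondblclick attribute itself: older IE versions expose event-handler attributes as function objects rather than plain strings, so substring selectors against them can behave differently there than in Firefox.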

  • How to lazily process an XML document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000 MB) XML files, I came across hexpat. There is an example in the Haskell Wiki that claims to

        -- Process document before handling error, so we get lazy processing.

    For testing purposes I redirected the output to /dev/null and threw a 300 MB file at it. Memory consumption kept rising until I had to kill the process. Then I removed the error handling from the process function:

        process :: String -> IO ()
        process filename = do
            inputText <- L.readFile filename
            let (xml, mErr) = parse defaultParseOptions inputText
                                :: (UNode String, Maybe XMLParseError)
            hFile <- openFile "/dev/null" WriteMode
            L.hPutStr hFile $ format xml
            hClose hFile
            return ()

    As a result, the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand it, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If yes, is there a way to handle the error while using constant memory? http://www.haskell.org/haskellwiki/Hexpat/
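
    One pattern that may help, sketched under the assumption that the hexpat API is exactly as used above: consume the tree first, and only inspect mErr once the output has been written, so the error check no longer forces the whole document up front.

        process :: String -> IO ()
        process filename = do
            inputText <- L.readFile filename
            let (xml, mErr) = parse defaultParseOptions inputText
                                :: (UNode String, Maybe XMLParseError)
            hFile <- openFile "/dev/null" WriteMode
            L.hPutStr hFile $ format xml    -- consumes the tree lazily
            hClose hFile
            case mErr of                    -- checked only after the output is done
                Nothing  -> return ()
                Just err -> hPutStrLn stderr ("XML parse error: " ++ show err)

    (hPutStrLn and stderr come from System.IO, which the version above already relies on for openFile.)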

  • Creating mock objects in PHPUnit

    - by Mike
    Hi, I've searched but can't quite find what I'm looking for, and the manual isn't much help in this respect. I'm fairly new to unit testing, so I'm not sure if I'm on the right track at all. Anyway, on to the question. I have a class:

        <?php
        class testClass {
            public function doSomething($array_of_stuff) {
                return AnotherClass::returnRandomElement($array_of_stuff);
            }
        }
        ?>

    Now, clearly I want AnotherClass::returnRandomElement($array_of_stuff) to return the same thing every time. My question is: in my unit test, how do I mock up this object? I've tried adding AnotherClass to the top of the test file, but when I want to test AnotherClass I get the "Cannot redeclare class" error. I think I understand factory classes, but I'm not sure how I would apply that in this instance. Would I need to write an entirely separate AnotherClass which contained test data, and then use the factory class to load that instead of the real AnotherClass? Or is using the factory pattern just a red herring? I tried this:

        $RedirectUtils_stub = $this->getMockForAbstractClass('RedirectUtils');
        $o1 = new stdClass();
        $o1->id = 2;
        $o1->test_id = 2;
        $o1->weight = 60;
        $o1->data = "http://www.google.com/?ffdfd=fdfdfdfd?route=1";
        $RedirectUtils_stub->expects($this->any())
                           ->method('chooseRandomRoot')
                           ->will($this->returnValue($o1));
        $RedirectUtils_stub->expects($this->any())
                           ->method('decodeQueryString')
                           ->will($this->returnValue(array()));

    in the setUp() function, but these stubs are ignored, and I can't work out whether it's something I'm doing wrong or the way I'm accessing the AnotherClass methods. Help! This is driving me nuts.
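
    For what it's worth, a PHPUnit mock only takes effect when the code under test calls it through an instance you hand in; a static call like AnotherClass::returnRandomElement() bypasses any mock entirely, which would explain the ignored stubs. A minimal dependency-injection sketch in PHPUnit 3.x style (class and method names taken from the question, the rest hypothetical):

        <?php
        class testClass {
            private $helper;

            // accept the collaborator instead of hard-coding a static call
            public function __construct($helper) {
                $this->helper = $helper;
            }

            public function doSomething($array_of_stuff) {
                return $this->helper->returnRandomElement($array_of_stuff);
            }
        }

        // in the test case:
        $helper = $this->getMock('AnotherClass', array('returnRandomElement'));
        $helper->expects($this->any())
               ->method('returnRandomElement')
               ->will($this->returnValue('fixed value'));
        $subject = new testClass($helper);
        ?>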

  • JavaScript/jQuery pop-up window in ASP.NET MVC4

    - by Mark
    Below I have a "button" (just a span with an icon) that creates a pop-up view of a div in my application, to allow users to compare information in separate windows. However, I get an ASP.NET error as follows:

        Server Error in '/' Application.
        The resource cannot be found.
        Requested URL: /Home/[object Object]

    Does anyone have an idea of why this is happening? Below is my code:

        <div class="module_actions">
            <div class="actions">
                <span class="icon-expand2 pop-out"></span>
            </div>
        </div>

        <script>
        $(document).ajaxSuccess(function () {
            var Clone = $(".pop-out").click(function () {
                $(this).parents(".module").clone().appendTo("#NewWindow");
            });
            $(".pop-out").click(function popitup(url) {
                LeftPosition = (screen.width) ? (screen.width - 400) / 1 : 0;
                TopPosition = (screen.height) ? (screen.height - 700) / 1 : 0;
                var sheight = (screen.height) * 0.5;
                var swidth = (screen.width) * 0.5;
                settings = 'height=' + sheight + ',width=' + swidth + ',top=' + TopPosition +
                           ',left=' + LeftPosition + ',scrollbars=yes,resizable=yes,toolbar=no,' +
                           'status=no,menu=no,directories=no,titlebar=no,location=no,addressbar=no';
                newwindow = window.open(url, '/Index', settings);
                if (window.focus) { newwindow.focus() }
                return false;
            });
        });
        </script>
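
    A likely culprit, judging only from the error text: jQuery passes the click event object as the handler's first argument, so url inside popitup is the event, and window.open(url, ...) stringifies it to [object Object], which the server then sees as /Home/[object Object]. A sketch of the fix, with the target path assumed:

        $(".pop-out").click(function (e) {
            e.preventDefault();
            var url = "/Home/Index";   // hypothetical route; substitute the real one
            var sheight = screen.height * 0.5;
            var swidth = screen.width * 0.5;
            var settings = "height=" + sheight + ",width=" + swidth +
                           ",scrollbars=yes,resizable=yes";
            // note: window.open's second argument is a window *name*, not a path
            var newwindow = window.open(url, "popout", settings);
            if (newwindow) { newwindow.focus(); }
        });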

  • Detail text in a TableView from Core Data on iPhone

    - by user554867
    Hey, this is my first question here at Stack Overflow. Let me try to be as specific as I can. The project is as follows: I am trying to parse an XML file in an iPhone app, which already works, and I am saving the parsed data into Core Data on the iPhone. So far so good. There are two elements of the XML file which I want to show in the table view, as the text and the detail text. Now the strange behaviour occurs: if I take the data from Core Data and try to visualize it in the table view, both elements are displayed in separate cells, and not as the detail text of the element. If I just use a constant string for the detail text, it works, but if I try to take a specific element from Core Data for every cell, it is shown separately in the table view. I googled a lot, reading here and there. I don't know which code I should show, because I don't have a clue where exactly the mistake could be. Maybe someone can answer this immediately because it is a common, simple mistake somewhere. Thanks for your help.
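
    Since no code is shown, this is only a guess at one common cause: a table cell created with the default style has no detail label at all; the subtitle style is required for detailTextLabel to render. In Swift terms (the question predates Swift, so treat this purely as a sketch of the idea, with the model array hypothetical):

        func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
            // .subtitle is required for detailTextLabel to be drawn at all
            let cell = UITableViewCell(style: .subtitle, reuseIdentifier: "Cell")
            let item = items[indexPath.row]   // hypothetical parsed-data array
            cell.textLabel?.text = item.title
            cell.detailTextLabel?.text = item.detail
            return cell
        }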

  • Deleting first and last element of a linked list in C

    - by LuckySlevin
    struct person {
        int age;
        char name[100];
        struct person *next;
    };

        void delfirst(struct person **p) // For deleting the beginning
        {
            struct person *tmp,*m;
            m = (*p);
            tmp = (*p)->next;
            free(m);
            return;
        }

        void delend(struct person **p) // For deleting the end
        {
            struct person *tmp,*m;
            tmp=*p;
            while(tmp->next!=NULL)
            {
                tmp=tmp->next;
            }
            m->next=tmp;
            free(tmp);
            m->next = NULL;
            return;
        }

    I'm looking for two separate functions to delete the first and last elements of a linked list. Here is what I tried (above). What do you suggest? Especially deleting the first is problematic for me.
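
    For reference, a corrected sketch of both functions: the posted delfirst never advances the head pointer before freeing it, and delend reads m->next before m is ever assigned.

        void delfirst(struct person **p)   /* delete the first node */
        {
            struct person *tmp = *p;
            if (tmp == NULL)
                return;                    /* empty list: nothing to do */
            *p = tmp->next;                /* move the head forward first */
            free(tmp);
        }

        void delend(struct person **p)     /* delete the last node */
        {
            struct person *cur = *p, *prev = NULL;
            if (cur == NULL)
                return;
            while (cur->next != NULL) {    /* walk to the end, remembering the previous node */
                prev = cur;
                cur = cur->next;
            }
            if (prev == NULL)
                *p = NULL;                 /* the list had exactly one node */
            else
                prev->next = NULL;
            free(cur);
        }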

  • Redirect to another site with header information attached (JavaScript)

    - by Nick LaMarca
    I am trying to make a client-side click redirect to another site with header information added. My client-side code for the onclick is this:

        function selectApp(appGUID, userId, embedUrl) {
            if (embedUrl === "") {
                var success = setAppGUID(appGUID);
                window.location.replace('AppDetail.aspx');
            } else {
                $.ajax({
                    type: "POST",
                    url: embedUrl,
                    contentType: "text/html",
                    beforeSend: function (xhr, settings) {
                        xhr.setRequestHeader("UserId", userId);
                    },
                    success: function (msg) {
                        //go to slx
                        window.location.replace(embedUrl);
                    }
                });
            }
        }

    And the server-side code in "embedUrl" is:

        protected void Page_Load(object sender, EventArgs e)
        {
            string isSet = (String)HttpContext.Current.Session["saveUserID"];
            if (String.IsNullOrEmpty(isSet))
            {
                NameValueCollection headers = base.Request.Headers;
                for (int i = 0; i < headers.Count; i++)
                {
                    if (headers.GetKey(i).Equals("UserId"))
                    {
                        HttpContext.Current.Session["saveUserID"] = headers.Get(i);
                    }
                }
            }
            else
            {
                TextBox1.Text = HttpContext.Current.Session["saveUserID"].ToString();
            }
        }

    This seems to work, but it's not too elegant. Is there a way to redirect with header data, without (what I'm doing now) saving the header info in a session variable and then doing the redirect as two separate pieces?
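
    As far as I know, a plain browser navigation (window.location) cannot carry custom request headers at all, which is why the POST-then-redirect dance is needed in the first place. A common workaround is to pass the value in the query string instead; a sketch, with the parameter name being an assumption:

        function selectApp(appGUID, userId, embedUrl) {
            if (embedUrl === "") {
                setAppGUID(appGUID);
                window.location.replace("AppDetail.aspx");
            } else {
                // carry the id in the URL instead of a header
                window.location.replace(embedUrl + "?userId=" + encodeURIComponent(userId));
            }
        }

    The server side would then read Request.QueryString["userId"] rather than scanning Request.Headers.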

  • Can I use my objects without fully populating them?

    - by LuckySlevin
    I have a situation. I have to create a sports club system in Java. There should be a class for keeping track of the club name, president name and the branches the club has. For each sports branch there should also be a class keeping track of a list of players. Each player should have a name, number, position and salary. So I came up with this, three separate classes:

        public class Team {
            String clubName;
            String preName;
            Branch[] branches;
        }

        public class Branch {
            Player[] players;
        }

        public class Player {
            String name;
            String pos;
            int salary;
            int number;
        }

    The problems are creating Branch[] in another class, and the same for Player[]. Is there anything simpler I can do? For example, I want to add info for only the club name, president name and branches of the club; in this situation, won't I have to enter players, names, salaries etc., since they are nested in each other? I hope I could be clear. For further questions you can ask.
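
    One route, sketched rather than prescribed: give each class a growable list and a small constructor, so a Team can exist before any branches or players are filled in.

        import java.util.ArrayList;
        import java.util.List;

        public class Team {
            private final String clubName;
            private final String presidentName;
            private final List<Branch> branches = new ArrayList<Branch>();

            public Team(String clubName, String presidentName) {
                this.clubName = clubName;
                this.presidentName = presidentName;
            }

            // branches can be attached later, one at a time
            public void addBranch(Branch branch) {
                branches.add(branch);
            }
        }

    Branch would hold a List<Player> and an addPlayer method in the same style, so entering a club with its branches never forces you to enter the players at the same time.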

  • How to structure code with 2 methods, one after another, which throw the same two exceptions?

    - by dotnetdev
    Hi, I have two methods, one called straight after the other, which both throw the exact same two exceptions (if an erroneous condition occurs; I'm not saying that I'm currently getting exceptions). For this, should I write separate try blocks, each with the one statement, and catch both exceptions in each? (Both of which I can handle: I checked the MSDN class library reference and there is something I can do, e.g. re-open the SqlConnection, or run a query rather than a stored proc which does not exist.) So, code like this:

        try
        {
            obj.Open();
        }
        catch (SqlException)
        {
            // Take action here.
        }
        catch (InvalidOperationException)
        {
            // Take action here.
        }

    and likewise for the other method I call straight after. This seems like a very messy way of coding. The other way is to code with the exception variable (which is omitted above, as I am using AOP to log the exception details via a class-level attribute). Doing this could help me find out which method caused an exception, so I could take action accordingly. Is this the best approach, or is there another best practice altogether? I also assume that, as only these two exceptions are thrown, I do not need to catch Exception, as that would be for an exception I cannot handle (causes way out of my control). Thanks
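
    If both call sites really do need identical handling, one way to avoid the duplication (a delegate-based sketch, not the only best practice) is to factor the try/catch into a helper that takes the operation as an Action:

        private static void WithHandling(Action operation)
        {
            try
            {
                operation();
            }
            catch (SqlException)
            {
                // Take action here (e.g. re-open the connection).
            }
            catch (InvalidOperationException)
            {
                // Take action here.
            }
        }

        // call sites stay one line each:
        WithHandling(() => obj.Open());
        WithHandling(() => obj.DoSomethingElse());   // hypothetical second method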

  • Basic Application Organization + Publishing (.NET 4.0)

    - by keynesiancross
    Hi all, I'm trying to figure out the best way to keep my program organized. Currently I have many class files in one project, but some of these classes do things that are very different, and some I would like to expose to other applications in the future. One thought I had was to create multiple projects, with one "main" project which would interact with all the other projects and their relevant classes as needed. Does this make sense? I was wondering if anyone had any suggestions about using multiple projects in one solution (and how do you create something like this?), and whether it makes sense to have multiple namespaces in one solution... Cheers

    ----Edit Below----

    Sorry, my fault. Currently my program is all in one console project. Within this project I have several classes, some of which basically launch a BackgroundWorker and run an endless loop pulling data. The BackgroundWorker then passes this data back to the main business logic as needed. I'm hoping to separate this data-pull material (including the BackgroundWorker material) into one project, and the rest of the business logic into another project. The projects will have to pass objects between each other, though (the data to the main business logic, and the business logic will pass startup parameters to the data-pull project)... Hopefully this adds a bit more detail.

  • Joomla Site Templates: Architecture Advice

    - by Vincent
    Our client provided us with HTML templates to turn into a Joomla template. The problem is that their designs are not Joomla-template friendly: a lot of the HTML design is not consistent in structure. Currently the only solution we have is applying a template structure pattern that fits most of their design, and keeping separate Joomla templates to take care of the ones that don't fit. We have the generic Joomla template configured with different positions for each div, and we assign each article to its respective position in the template. Some articles, though, have menu modules within them, so our solution is to split the article into two positions and define positions for each menu module. Is this method better than defining module positions within an article's content to render menus within an article? Is there a better way of showing articles in specific div positions than having each article represented by a module that renders in a specific div (position) in a template? Right now our way of rendering an article's content in a specific position is to create a module (moduleAsArticle) and give that module a position: create an article, assign a module to it (moduleAsArticle), and define a position for that module.
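
    For reference, Joomla's stock Content - Load Modules plugin can pull a module position inline into article text, which may avoid splitting articles at all (assuming the plugin is enabled and this syntax matches the Joomla version in use; the position name here is hypothetical):

        {loadposition myposition}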

  • Parse particular text from an XML string

    - by Dan Sewell
    Hi all, I'm writing an app which reads an RSS feed and places items on a map. I need to read the lat and long numbers only from this string, which is contained in link tags:

        http://www.xxxxxxxxxxxxxx.co.uk/map.aspx?isTrafficAlert=true&lat=53.647351&lon=-1.933506

    I'm a bit of a programming noob, but I'm writing this in C#/Silverlight using LINQ to XML. Should this text be extracted during parsing, or after parsing and sent to a class to do it? Many thanks for your assistance.

    EDIT: I'm going to try a regex on this. Here is where I need to integrate the regex: I need to take the lat and long from the Link element and separate them into two variables I can use (the results are part of a foreach loop that creates a list).

        var events = from ev in document.Descendants("item")
                     select new
                     {
                         Title = (ev.Element("title").Value),
                         Description = (ev.Element("description").Value),
                         Link = (ev.Element("link").Value),
                     };

    The question is, I'm not quite sure where to put the regex (once I work out how to use the regex properly! :-) )
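
    One way to slot it in, sketched under the assumption that lat and lon always appear as plain decimal query values: run each Link through a small regex helper after the query, either inside the select or in the foreach loop.

        using System.Globalization;
        using System.Text.RegularExpressions;

        static class LinkParser
        {
            // captures the two numbers out of ...&lat=53.647351&lon=-1.933506
            static readonly Regex LatLon = new Regex(@"lat=(-?\d+\.?\d*)&lon=(-?\d+\.?\d*)");

            public static bool TryParse(string link, out double lat, out double lon)
            {
                lat = lon = 0;
                var m = LatLon.Match(link);
                if (!m.Success) return false;
                lat = double.Parse(m.Groups[1].Value, CultureInfo.InvariantCulture);
                lon = double.Parse(m.Groups[2].Value, CultureInfo.InvariantCulture);
                return true;
            }
        }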

  • How important is it that models be consistent across project components?

    - by RonLugge
    I have a project with two components, a server-side component and a client-side component. For various reasons, the client-side device doesn't carry a full copy of the database around. How important is it that my models have a 1:1 correlation between the two sides? And, to extend the question to my bigger concern, are there any time bombs I'm going to run into down the line if they don't? I'm not talking about having different information on each side, but rather that the way the information is encapsulated will vary. (Obviously, storage mechanisms will also vary.) The server side will store each user, each review, and each 'item' in separate tables, and create links between them to gather data as necessary. The client side shouldn't have a complete user database, however, so rather than linking against the user to gather things like 'name', I'd store that on the review. In other words:

        --- Server Side ---
        Item:
            +id           // store stuff about the item
        User:
            +id
            +Name
            -Password
        Review:
            +id
            +itemId
            +rating
            +text
            +userId

        --- Device Side ---
        Item:
            +id
            +AverageRating
        Review:
            +id
            +rating
            +text
            +userId
            +name
        User:
            +id
            +Name         // stuff

    The basic idea is that certain 'critical' information gets moved one level 'up'. A user gets the list of 'items' relevant to their query, with certain review-oriented data moved up (i.e. the average rating). If they want more info, they query the detail view for the item, and the actual reviews get queried and added to the dataset (and displayed). If they query the actual review, the review gets queried and they pick up some additional user info along the way (maybe; I'm not sure the user would have any use for the additional user information). My basic concern is that I don't want to glut the user's bandwidth or local storage with a huge variety of information that they just don't need, even if proper database normalization suggests that information REALLY should be stored at a 'lower' level. I've phrased this as a fairly low-level conceptual issue because that's the level I'm trying to think and worry over, but if it matters, I'm creating a PHP/MySQL server that provides data for an iOS/CoreData client.

  • QTime or QTimer wait for timeout

    - by user968243
    I'm writing a Qt application, and I have a four-hour loop (in a separate thread). In that four-hour loop, I have to:

    1. do stuff with the serial port;
    2. wait a fixed period of time;
    3. do some more stuff with the serial port;
    4. wait an arbitrary amount of time;
    5. when 500 ms have passed, do more stuff;
    6. go to 1 and repeat for four hours.

    Currently, my method of doing this is really bad and crashes some computers. I have a whole bunch of code, but the following snippet basically shows the problem; the CPU goes to ~100% and eventually can crash the computer.

        void CRelayduinoUserControl::wait(int timeMS)
        {
            int curTime = loopTimer->elapsed();
            while(loopTimer->elapsed() < curTime + timeMS);
        }

    I need to somehow wait for a particular amount of time to pass before continuing on with the program. Is there some function which will just wait for some arbitrary period of time while keeping all the timers going?
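
    One non-spinning alternative, sketched with Qt 4-style signal/slot syntax: run a local event loop that a single-shot timer quits, so timers and signals keep being processed while the wait blocks.

        #include <QEventLoop>
        #include <QTimer>

        void CRelayduinoUserControl::wait(int timeMS)
        {
            QEventLoop loop;
            QTimer::singleShot(timeMS, &loop, SLOT(quit()));
            loop.exec();   // blocks here without pegging the CPU; events still run
        }

    If the loop lives in a worker thread that has no event processing of its own, a plain thread sleep is the simpler option, at the cost of not servicing events during the wait.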

  • Show dialog while loading new screen

    - by darkdusky
    I have a front screen with a button which opens a second screen. The second screen can take a few seconds to load, so I want to display a dialog while it loads. My problem is that the dialog does not display while the second screen loads, but it displays when I return to the first page from the second page. If I comment out the "startActivity" that opens the second page, the dialog shows fine.

        //code snippet from inside onCreate:
        NewGame.setOnClickListener(new View.OnClickListener() {
            public void onClick(View arg0) {
                //does not get displayed before 2nd page opens
                showDialog(DIALOG2_KEY);
                //shows fine if next 2 lines commented out
                Intent i = new Intent(screen1.this, SudukuXL.class);
                startActivity(i);
            }
        });

    I'm fairly new to Android programming; I guess it has something to do with threads. I've dealt with the dialog showing on returning to the front screen using onPause(). I've tried using threads to separate the dialog from the startActivity, but I've had no luck. Any help would be appreciated. I used code from the Android examples to create the dialog, included below for reference:

        protected Dialog onCreateDialog(int id) {
            switch (id) {
                case DIALOG2_KEY: {
                    ProgressDialog dialog = new ProgressDialog(this);
                    dialog.setMessage("Loading...");
                    dialog.setIndeterminate(true);
                    dialog.setCancelable(true);
                    return dialog;
                }
            }
            return null;
        }
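
    A guess at the mechanism: showDialog() only queues the dialog for display, and the second activity's slow onCreate() then occupies the UI thread before a frame is ever drawn, so the dialog only appears once you come back. One sketch of a workaround is to let the event loop run once before launching, though the more robust fix is usually to move the slow loading inside SudukuXL off its onCreate (e.g. into an AsyncTask there):

        showDialog(DIALOG2_KEY);
        // give the dialog one pass through the event loop before launching
        new Handler().post(new Runnable() {
            public void run() {
                startActivity(new Intent(screen1.this, SudukuXL.class));
            }
        });

    (Handler is android.os.Handler; even with this, the dialog may still freeze while the second activity loads, which is why moving the work off the UI thread is the usual advice.)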

  • IE7 li with image and text issue. Almost working.

    - by jmoore111
    I'm using an unordered list, with an image to the left and text to the right. I only noticed today that IE7 was displaying this so badly that it wasn't even acceptable. Whoops for not realising sooner. I have given the image a class, and the text is wrapped in a separate div:

        <ul>
            <li>
                <a href="domain"><img src="http://www.domain.com/img.jpg" alt="img" class="footer-thumb" width="40px" height="40px" /></a>
                <div class="recent-post-content">
                    <p>day date time</p>
                    <p><a title="Content title" href="http://www.domain.com/contentitle">Content title &raquo;</a></p>
                </div>
            </li>
        </ul>

    What's happening so far is that it is perfect in every browser except IE7, where the images are exactly where they should be, but the "recent-post-content" divs are all pushed up exactly one image ([li] height) too high, so the last image doesn't have any text beside it and the first image ([li]) has its "recent-post-content" above it. I guess it's something simple, but after getting this far on my own I thought it best to try and get some advice to fix the last little bit. Any ideas? Thanks

  • [C++/STL] Selective iterator

    - by rubenvb
    FYI: no Boost; yes, it has this; I want to reinvent the wheel ;) Is there some form of a selective iterator (possible) in C++? What I want is to separate strings like this:

        some:word{or other

    into a form like this:

        some : word { or other

    I can do that with two loops and find_first_of(":") and ("{"), but this seems (very) inefficient to me. I thought that maybe there would be a way to create/define/write an iterator that would iterate over all these values with for_each. I fear this will have me writing a full-fledged, custom, way-too-complex iterator class for a std::string. So I thought maybe this would do:

        std::vector<size_t> list;
        size_t index = mystring.find(":");
        while( index != std::string::npos )
        {
            list.push_back(index);
            index = mystring.find(":", list.back());
        }
        std::for_each(list.begin(), list.end(), addSpaces(mystring));

    This looks messy to me, and I'm quite sure a more elegant way of doing this exists. But I can't think of it. Anyone have a bright idea? Thanks. PS: I did not test the code posted; it is just a quick write-up of what I would try.
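
    For what it's worth, a single forward pass avoids both the double scan and the index bookkeeping (a sketch in pre-C++11 style to match the era; note that the posted loop would also need find(":", list.back() + 1), otherwise it keeps re-finding the same position):

        #include <string>

        std::string addSpaces(const std::string& in)
        {
            std::string out;
            out.reserve(in.size() * 2);          // worst case: every char gets padded
            for (std::string::size_type i = 0; i < in.size(); ++i) {
                if (in[i] == ':' || in[i] == '{') {
                    out += ' ';
                    out += in[i];
                    out += ' ';
                } else {
                    out += in[i];
                }
            }
            return out;
        }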

  • Script dies, I'm not sure why

    - by Webnet
    I'm trying to parse a 6,000-line, 500 KB file into an array so I can import the data into our system. The problem is that the script stops executing somewhere between lines 3000-4000. There are no breaks in the code; we use it on other imports. Any ideas on why this might be happening and what I can do to prevent it?

        /**
         * Takes a separated-value string and makes it an array
         * @param $delimiter string The delimiter to separate by, usually a comma or tab
         * @param $string string The string to separate
         * @return array The resulting array
         */
        public function svToArray ($delimiter, $string)
        {
            $x = 0;
            $rowList = array();
            $splitContent = preg_split("#\n+#", trim($string));
            foreach ($splitContent as $key => $value) {
                $newData = preg_split("#".$delimiter."#", $value);
                if ($x == 0) {
                    $headerValues = array_values($newData);
                } else {
                    $tempRow = array();
                    foreach ($newData as $rowColumnKey => $rowColumnValue) {
                        $tempRow[$headerValues[$rowColumnKey]] = $rowColumnValue;
                    }
                    $rowList[] = $tempRow;
                }
                $x++;
            }
            return $rowList;
        }
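
    A script that dies partway through a loop with no visible error is often hitting PHP's execution-time or memory cap rather than a code bug (a guess, since no error output is shown). Worth ruling out first; the values here are illustrative:

        <?php
        // raise the caps for the duration of the import
        ini_set('memory_limit', '256M');
        set_time_limit(0);             // 0 = no execution time limit

        // make sure a fatal would actually be reported somewhere
        ini_set('log_errors', '1');
        error_reporting(E_ALL);
        ?>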

  • Windows Server 2003 Terminal Server does not give out all available licenses (solved)

    - by Erwin Blonk
    I installed the Terminal Server role in Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, licenses and all, acts as if it only has the 2-connection RDP licensing. The servers are all stand-alone; there is no Active Directory set up, and both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry (the exact phrasing of which I forget) about temporary Windows 2003 device licenses. Later it added "Windows Server 2003 - TS Per Device CAL". The temporary pool held 2 licenses (standard RDP licenses, I think) and the other held 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours, or so it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone.

    Update 2 (4/12/10): The problem has been solved. It seems the activation over the web, while it said it had succeeded, did not work correctly; after activating by phone, it did work. What was different from the old setup, and what put me on the wrong foot from that moment on, was that I now need to create separate user accounts, because a session belonging to one user account will be taken over when someone else logs in with that account. On the previous server, it was possible to open several sessions with the same account. We now use Per Device licenses; I'm not sure what was used before. Thanks all for the replies.

  • DKIM error: dkim=neutral (bad version) header.i=

    - by GBC
    I've been struggling for the last couple of hours with setting up DKIM on my Postfix/CentOS 5.3 server. It finally sends and signs the emails, but apparently Google still does not like it. The error I'm getting, from Google's "show original" interface, is:

        dkim=neutral (bad version) [email protected]

    This is what my DKIM-Signature header looks like:

        v=1; a=rsa-sha1; c=simple/simple; d=mydomain.com.au; s=default;
        t=1267326852; bh=0wHpkjkf7ZEiP2VZXAse+46PC1c=;
        h=Date:From:Message-Id:To:Subject;
        b=IFBaqfXmFjEojWXI/WQk4OzqglNjBWYk3jlFC8sHLLRAcADj6ScX3bzd+No7zos6i
         KppG9ifwYmvrudgEF+n1VviBnel7vcVT6dg5cxOTu7y31kUApR59dRU5nPR/to0E9l
         dXMaBoYPG8edyiM+soXo7rYNtlzk+0wd5glgFP1I=

    I'd be very appreciative of any suggestions as to how I can solve this problem! By the way, here is exactly how I installed dkim-milter on CentOS 5.3 for Postfix, if anyone is interested (based on this guide):

        mkdir dkim-milter
        cd dkim-milter
        wget http://www.topdog-software.com/oss/dkim-milter/dkim-milter-2.8.3-1.x86_64.rpm
        # newest version: http://www.topdog-software.com/oss/dkim-milter/
        rpm -Uvh dkim-milter-2.8.3-1.x86_64.rpm
        /usr/bin/dkim-genkey -r -d mydomain.com.au

    Add the contents of default.txt to DNS as TXT records:

        _ssp._domainkey     TXT  dkim=unknown
        _adsp._domainkey    TXT  dkim=unknown
        default._domainkey  TXT  v=DKIM1; g=*; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GWETBNiQKBgQC5KT1eN2lqCRQGDX+20I4liM2mktrtjWkV6mW9WX7q46cZAYgNrus53vgfl2z1Y/95mBv6Bx9WOS56OAVBQw62+ksXPT5cRUAUN9GkENPdOoPdpvrU1KdAMW5c3zmGOvEOa4jAlB4/wYTV5RkLq/1XLxXfTKNy58v+CKETLQS/eQIDAQAB

    Install the private key:

        mv default.private default
        mkdir /etc/mail/dkim/keys/mydomain.com.au
        mv default /etc/mail/dkim/keys/mydomain.com.au
        chmod 600 /etc/mail/dkim/keys/mydomain.com.au/default
        chown dkim-milt.dkim-milt /etc/mail/dkim/keys/mydomain.com.au/default

    /etc/dkim-filter.conf:

        ADSPDiscard yes
        ADSPNoSuchDomain yes
        AllowSHA1Only no
        AlwaysAddARHeader no
        AutoRestart yes
        AutoRestartRate 10/1h
        BaseDirectory /var/run/dkim-milter
        Canonicalization simple/simple
        Domain mydomain.com.au   # add all your domains here, separated with commas
        ExternalIgnoreList /etc/mail/dkim/trusted-hosts
        InternalHosts /etc/mail/dkim/trusted-hosts
        KeyList /etc/mail/dkim/keylist
        LocalADSP /etc/mail/dkim/local-adsp-rules
        Mode sv
        MTA MSA
        On-Default reject
        On-BadSignature reject
        On-DNSError tempfail
        On-InternalError accept
        On-NoSignature accept
        On-Security discard
        PidFile /var/run/dkim-milter/dkim-milter.pid
        QueryCache yes
        RemoveOldSignatures yes
        Selector default
        SignatureAlgorithm rsa-sha1
        Socket inet:20209@localhost
        Syslog yes
        SyslogSuccess yes
        TemporaryDirectory /var/tmp
        UMask 022
        UserID dkim-milt:dkim-milt
        X-Header yes

    /etc/mail/dkim/keylist:

        *@mydomain.com.au:mydomain.com.au:/etc/mail/dkim/keys/mydomain.com.au/default

    /etc/postfix/main.cf (additions):

        smtpd_milters = inet:localhost:20209
        non_smtpd_milters = inet:localhost:20209
        milter_protocol = 2
        milter_default_action = accept

    /etc/mail/dkim/trusted-hosts:

        localhost
        127.0.0.1

    /etc/mail/local-host-names:

        localhost
        127.0.0.1

    Finally:

        /sbin/chkconfig dkim-milter on
        /etc/init.d/dkim-milter start
        /etc/init.d/postfix restart

  • Why do I have multiple drives in my backup system image?

    - by bebop
    I have a drive which has 2 partitions. One is where the OS is installed; the other is a data (but not libraries) drive. When I try to create a backup using the built-in tool, it wants to include both partitions in the system image. Why does it do this? If I move the OS to a separate drive, will I be able to back up just this data?

    Edit: To be more clear: I have 4 disks in the machine. One disk has 2 partitions, c: and e:; the other disks are d:, f: and h:. The OS is installed on c: and libraries are stored on h:. The libraries are already backed up using CrashPlan, but I want to create a system image so I can easily restore the machine if it either dies or if I get an SSD drive. When I choose backup (either through the wizard or through Control Panel) and check (or click) "create a system image", it automatically adds both c: and e: to the list of drives that will be backed up, and I cannot change this; the checkboxes to unselect them are greyed out. I would like to know why it automatically adds e: to the list (but not h:, where the libraries are), and whether I can change some setting so that whatever files it has on e: that it thinks need to be backed up as part of the system image are moved to c:. How can I determine what they are? Is it because c: and e: are partitions of the same disk? If I move c: to a different disk, will that mean I only have to back up c:? Thanks

    Edit 2: I have viewed all files, including hidden and system ones, on both drives, and it seems that I have a suspicious hidden e:\boot\ folder. I think that I might have installed the OS as a VHD at first, then installed a separate version straight on the disk, had dual boot for a while, then used EasyBCD to remove the VHD boot entry and file. Might this be what is causing my issue? How might I go about removing it? Is it safe to just delete the boot folder?

  • Lock-ups and crashing when transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems that it hasn't been discussed yet, or at least I can't find any information relating to it. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup:

    1. QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest)
    2. Server 2003 with iSCSI Initiator version 2.08, build 3825 (I'm copying files from another machine to shares on Server 2003, i.e. into a TrueCrypt volume and thus onto the NAS)

    I have mounted the LUN and formatted it with TrueCrypt using NTFS (a full format, not a quick one). What happens is that some files, mainly RAR/compressed files, appear to copy but fail. I've tested this in a number of ways and can repeat the failure every time. So I tested a transfer over iSCSI without TrueCrypt in between, onto a plain NTFS format, and there was no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time.

    Update: I copied a set of files, the ones I was having issues with, to the server, and from there I copied them into two places within the TrueCrypt volume (mounted from the NAS):

    1. a separate directory created in the root of the volume
    2. the same initial directory I was using in the first instance

    Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements (a day wasted as well). While I had that setup, though, I copied my entire file set to the volume files, and all files copied without error: over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system, as I highly doubt that is the cause in light of this finding. So, any ideas?

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold ASP code as well. Bonus points for a solution which works in more or less the same way with ASP code on Windows Server 2008 R2. We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box -> commit into the trunk -> auto-deploy on the staging system -> test on the staging system -> merge into /branches/live/ -> manual deployment on the live system. This works very well for one-way changes; however, we have some trouble with every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. In the end, we did the update via the WP GUI, restored the svn admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system). I'm currently thinking of the following:

    1. the htdocs of each website is an svn export
    2. each website has an svn working copy beside the htdocs directory
    3. a script "replays" the changes in the working copy after an update in WP (rsync'ing the changed files to the working copy, rsync'ing new files and svn add-ing them, and finally svn delete-ing the deleted files); the script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.), as in the sketch below

    Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
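
    A rough shape for that replay script, as a sketch only (the paths are hypothetical, and paths containing spaces would need more care):

        #!/bin/sh
        # hypothetical layout: the htdocs export lives next to its working copy
        HTDOCS=/var/www/site/htdocs
        WC=/var/www/site/wc

        # mirror changed and new files into the working copy, keeping svn's
        # metadata and the volatile files out of the transfer
        rsync -a --delete \
              --exclude '.svn' \
              --exclude 'wp-config.php' \
              --exclude 'uploads/temp/' \
              "$HTDOCS/" "$WC/"

        cd "$WC" || exit 1
        # schedule additions and deletions based on svn's own view of the tree
        svn status | awk '/^\?/ {print $2}' | xargs -r svn add
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete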

  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:

        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.

        Source Error:
        Line 105:   Set explicit="true" to force declaration of all variables.
        Line 106:   -->
        Line 107:   <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
        Line 108:     <assemblies>
        Line 109:       <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

    When I try to manage the Basic Settings on the site and click the "Test Settings" button, I see that I have a problem under "authorization":

        The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

    1) Do I need to grant IIS rights to the new folder? Which user? I thought it was something like IIS_USER or similar, but I cannot determine the correct name of the user.

    2) Also, do I need to set the default version of the framework somewhere, at the default site level or at the virtual folder level? How is this done in IIS 6? I am used to IIS 5, or whatever came with XP Pro.

    3) My original site had a subfolder under wwwroot called "aspnet_client". How was this created? I manually copied it to the corresponding new location. My app was using separate ASP-specific databases for storing session state and role info, if that is relevant.

    Thanks
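
    Two checks that may help, hedged because the exact setup isn't shown: the 'targetFramework' parser error is the classic symptom of a site running under a .NET 2.0 application pool, and the pass-through warning usually just means the pool identity lacks NTFS read rights on the new folder. If this is IIS 7 or later, both can be addressed from an elevated prompt (the pool name and path here are assumptions):

        REM point the site's application pool at the 4.0 runtime
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0

        REM grant the IIS worker-process group read/execute on the new webroot
        icacls "d:\inetpub\wwwroot" /grant "IIS_IUSRS:(OI)(CI)(RX)"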
