Search Results

Search found 5496 results on 220 pages for 'threaded comments'.


  • How to avoid concurrent execution of a time-consuming task without blocking?

    - by Diego V
    I want to efficiently avoid concurrent execution of a time-consuming task in a heavily multi-threaded environment, without making threads wait for a lock when another thread is already running the task. Instead, in that scenario, I want them to fail gracefully (i.e. skip their attempt to execute the task) as fast as possible. To illustrate the idea, consider this unsafe code (it has a race condition!):

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            running = true;
            try {
                runExpensiveTask();
            } finally {
                running = false;
            }
        }

    I thought about using a variation of double-checked locking (consider that running is a primitive 32-bit field, hence atomic, so it could work even for Java below 5 without the need for volatile). It could look like this:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            synchronized (ThisClass.class) {
                if (running) return;
                running = true;
                try {
                    runExpensiveTask();
                } finally {
                    running = false;
                }
            }
        }

    Maybe I should use a local copy of the field as well (I'm not sure; please tell me). But then I realized that either way I end up with an inner synchronization block that could still hold a thread, given the right timing, at the monitor entrance until the original executor leaves the critical section (I know the odds are usually minimal, but here we are imagining several threads competing for this long-running resource). So, can you think of a better approach?
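
    A minimal sketch of one common alternative, using java.util.concurrent.atomic.AtomicBoolean (assuming Java 5+; the class and task names are illustrative): compareAndSet atomically flips the flag for exactly one caller, so every other thread fails fast without ever blocking on a monitor.

        import java.util.concurrent.atomic.AtomicBoolean;

        public class ExpensiveTaskLauncher {
            private static final AtomicBoolean running = new AtomicBoolean(false);

            public void launchExpensiveTask() {
                // Atomically flips false -> true for exactly one caller;
                // everyone else returns immediately without waiting.
                if (!running.compareAndSet(false, true)) {
                    return; // another thread is already running the task
                }
                try {
                    runExpensiveTask();
                } finally {
                    running.set(false); // allow the next attempt
                }
            }

            private void runExpensiveTask() { /* the long-running work */ }
        }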

    Read the article

  • Hosting - why should I pay? [on hold]

    - by user196919
    Can I avoid, eliminate or neutralize DNS propagation delay? Hours of research for a first-time unregistered domain returned the usual suspects ("Oneandblank", "GoBlanky", "BlankCow", "HostBlank-or" and others), all quoting 24-72 hours or more. Would a roll-your-own, in-house SOHO Linux/Windows-server box help? Thank you so much! Edit: navigating SF for the first time, I now notice the "professionals" air and feel of the comments. Although I feel justified in being here (armed with directly relevant, well-rounded knowledge, proficiency and experience), I apologize for diving in headlong without proper post etiquette or correct placement. I envy each of you and hope to gain my own inviolable foothold in the coming years.

    Read the article

  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program with a GUI running under Tkinter. To check when new messages have arrived, I create a new thread, so that Tkinter keeps doing its thing without locking up while the new thread grabs what I need and updates the Tkinter window. This, however, becomes a huge CPU hog, and my guess is that it has something to do with the fact that the thread is started and never really released when the function is done. Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done, and by itself it does not use much processing power: when I run it unthreaded it doesn't take up much CPU, but it locks up Tkinter). Note: this is inside a class, hence the extra indentation.

        def interim(self):
            threading.Thread(target=self.readLog).start()
            self.after(5000, self.interim)

        def readLog(self):
            print 'reading'
            try:
                length = len(str(self.readNumber))
                f = open('chatlog' + str(myport), 'r')
                temp = f.readline().replace('\n', '')
                while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                    temp = f.readline().replace('\n', '')
                while temp:
                    if temp[0] != '<':
                        self.updateChat(temp[length:])
                        self.readNumber += 1
                    else:
                        self.updateChat(temp)
                    temp = f.readline().replace('\n', '')
                f.close()
            except IOError:
                pass  # the except clause was cut off in the original post; IOError is a guess

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?
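
    A sketch of one common restructuring (the names other than updateChat and after are hypothetical): instead of spawning a new thread every 5 seconds, start a single long-lived daemon worker that sleeps between polls and hands new lines to the GUI through a queue, which the Tkinter side drains with after(). This keeps all widget calls on the Tkinter thread and leaves exactly one background thread for the life of the program.

        import threading
        import time
        import Queue  # Python 2, matching the print syntax above

        def start_worker(self):
            self.queue = Queue.Queue()
            worker = threading.Thread(target=self.poll_log)
            worker.daemon = True               # don't keep the app alive on exit
            worker.start()
            self.drain_queue()

        def poll_log(self):
            while True:
                for line in self.read_new_lines():   # hypothetical helper doing the file reading
                    self.queue.put(line)
                time.sleep(5)                  # sleep in the worker instead of re-spawning threads

        def drain_queue(self):
            try:
                while True:
                    self.updateChat(self.queue.get_nowait())
            except Queue.Empty:
                pass
            self.after(200, self.drain_queue)  # runs on the Tkinter thread, so no cross-thread GUI calls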

    Read the article

  • C++ gdb GUI

    - by HappyDude
    Briefly: does anyone know of a GUI for gdb that brings it on par, or close to it, with the feature set of more recent versions of Visual C++?

    In detail: as someone who has spent a lot of time programming on Windows, one of the larger stumbling blocks I've found whenever I have to code C++ on Linux is that debugging anything with command-line gdb takes me several times longer than it does in Visual Studio, and it does not seem to be getting better with practice. Some things are just easier or faster to express graphically. Specifically, I'm looking for a GUI that:

    - Handles the basics like stepping over and into code, watching variables, and breakpoints
    - Understands and can display the contents of complex and nested C++ data types
    - Doesn't get confused by, and preferably can intelligently step through, templated code and data structures, while displaying relevant information such as the parameter types
    - Can handle threaded applications and switch between threads to step through them or view their state
    - Can handle attaching to an already-started process or reading a core dump, in addition to starting the program in gdb

    If such a program does not exist, I'd like to hear about experiences people have had with programs that meet at least some of these bullet points. Does anyone have any recommendations?

    Edit: listing out the possibilities is great, and I'll take what I can get, but it would be even more helpful if you could include in your responses: (a) whether you've actually used the GUI and, if so, what positive/negative feedback you have about it; and (b) if you know, which of the above-mentioned features are or aren't supported. Lists are easy to come by; sites like this are great because you can get an idea of people's personal experiences with applications.

    Read the article

  • Sheet and thread memory problem

    - by Xident
    Hi guys, I recently started a project which can export some precalculated graphics/audio to files for post-processing. All I did was put a new window (with a progress indicator and an Abort button) in my main xib and open it using the following code:

        [NSApp beginSheet:REC_Sheet
           modalForWindow:MOTHER_WINDOW
            modalDelegate:self
           didEndSelector:nil
              contextInfo:nil];

        NSModalSession session = [NSApp beginModalSessionForWindow:REC_Sheet];
        RECISNOTDONE = YES;
        while (RECISNOTDONE) {
            if ([NSApp runModalSession:session] != NSRunContinuesResponse)
                break;
            usleep(100);
        }
        [NSApp endModalSession:session];

    A background thread (pthread) was started earlier to actually perform the work and save all the targas/wave files. This worked great, but after some time it turned out that the main thread was no longer responding and my memory footprint was rising unstoppably. I tried to debug it with Instruments and saw a lot of CFHash and similar objects growing to infinity. By accident I clicked below the sheet, and that helped temporarily: the main thread (AppKit?) released its objects, but only for a little while. I can't explain it. At first I thought the cause was my thread's access to the progress bar to update the progress (at 0.5 s intervals), so I cut that out. But even when I update nothing and leave the progress bar alone, my application eats up all the memory, because the main thread never releases its event objects (or whatever they are). Is there any way to drain this main-thread memory (a run loop or NSApp call?), and why on earth does the main thread stop responding after this simple task? I don't have a clue anymore, please help! Thanks in advance! P.S. How do you implement "long threaded task" scenarios while updating your GUI?
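
    A sketch of the usual first fix for exactly this symptom (assuming manual reference counting, as the era of this code suggests): anything that pumps events autoreleases objects into the thread's autorelease pool, and a loop like the one above never gives that pool a chance to drain. Creating and draining a pool on every iteration keeps the footprint flat:

        RECISNOTDONE = YES;
        while (RECISNOTDONE) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            if ([NSApp runModalSession:session] != NSRunContinuesResponse)
                RECISNOTDONE = NO;
            usleep(100);
            [pool drain];   // releases everything autoreleased during this pass of the loop
        }

    For the background-work-plus-GUI question, the common shape is to do the work on the secondary thread and push UI updates over to the main thread with performSelectorOnMainThread:withObject:waitUntilDone:.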

    Read the article

  • Enable group policy for everything but the SBS?

    - by Jerry Dodge
    I have created a new group policy to disable IPv6 on all machines. There is only the one default OU, with no special configuration. However, this policy must not apply to the SBS itself (nor to the other DC at another location on a different subnet), because those machines depend on IPv6; all the rest do not. I did see a recommendation to create a new OU and put those machines under it, but many other comments say that is extremely messy and not recommended, since it becomes high-maintenance when it comes to changing other group policies. How can I apply this single group policy to every machine except the domain controllers? PS: Yes, I understand IPv6 will eventually be the new standard, but until then we have no intention of implementing it, and it is in fact causing us many issues when enabled.

    Read the article

  • Write file need to optimised for heavy traffic part 2

    - by Clayton Leung
    For anyone interested in where I am coming from, you can refer to part 1, but it is not necessary: write file need to optimised for heavy traffic. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error. I need to optimize it, because in peak hours the zf_TickEvent method will be called more than 10,000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it to a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Questions:

    - Any suggestion to make this even faster? I will try varying the buffer length, but in terms of code structure, is this (almost) the fastest?
    - When the MemoryStream fills up and I am emptying it to the file, what happens to new data coming in? Do I need to implement a second buffer to hold that data while I am emptying the first, or is C# smart enough to figure it out?

    Thanks for any advice.
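
    Since the broker API is single threaded, no tick can arrive while emptyBuffer runs; but that also means the tick thread is stalled for the duration of every file write. A sketch of one way to take the file I/O off the tick thread entirely (assuming .NET 4+, since it uses BlockingCollection; the class and member names are illustrative): one dedicated consumer owns the file, and zf_TickEvent only enqueues a string.

        using System.Collections.Concurrent;
        using System.IO;
        using System.Text;
        using System.Threading.Tasks;

        class TickWriter
        {
            private readonly BlockingCollection<string> queue = new BlockingCollection<string>();

            public TickWriter(string path)
            {
                // A single long-running consumer drains the queue and writes to disk;
                // StreamWriter's internal buffer replaces the manual MemoryStream.
                Task.Factory.StartNew(() =>
                {
                    using (var writer = new StreamWriter(path, true, Encoding.ASCII, 65536))
                    {
                        foreach (var line in queue.GetConsumingEnumerable())
                            writer.Write(line); // buffered; flushed when the writer is disposed
                    }
                }, TaskCreationOptions.LongRunning);
            }

            public void Enqueue(string line) { queue.Add(line); }   // called from zf_TickEvent
            public void Complete() { queue.CompleteAdding(); }      // call once at shutdown
        }

    This also answers the second question for the multi-threaded case: the queue itself is the "second buffer", and it grows as needed while the consumer is busy.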

    Read the article

  • Dump vim screen into a file

    - by user18151
    I need to dump whatever is seen on the vim screen, as-is, with colors and everything. Is there a way to do it? I am hoping that ncurses uses the same escape sequences for colors as bash, so that when I cat the file I dumped the screen to, I get the same output as in vim. I want to use this when I am doing a side-by-side colorful diff of files and need to print them. If anyone knows of any other side-by-side colorful diff programs, please feel free to mention them in the comments (not answers, because I am hoping this question gets answered so that it can be used by others).
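
    One approach worth trying (a sketch; it produces HTML rather than raw escape sequences, but it preserves the highlighting for printing): vim ships with the :TOhtml command, which in recent versions renders a diff view, colors included, to an HTML file:

        # open a side-by-side diff, render it to HTML, write and quit
        vimdiff file1 file2 -c 'TOhtml' -c 'w! diff.html' -c 'qa!'

    For raw terminal output with ANSI escapes, colordiff is another option, e.g. colordiff -y file1 file2 piped through less -R.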

    Read the article

  • How to maximize parallel download from S3

    - by StCee
    I have a lot of images to load from Amazon S3 on a single page, and sometimes it takes quite some time to load them all. I have heard that splitting the images across different sub-domains helps parallel downloads, but what is the actual implementation of that? While it is easy to split by purpose into sub-domains like static, image, etc., should I make something like 10 sub-domains (image1, image2, ...) to load, say, 100 images? Or is there a cleverer way to do it? (By the way, I am considering using memcache to cache the S3 images; I am not sure if that is possible. I would be grateful for any further comments. Thanks a lot!)
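
    A common trick (a sketch in PHP; the hostnames are made up) is to assign each image to one of a small, fixed set of aliases of the same bucket deterministically, so that a given URL always resolves to the same hostname and stays cacheable:

        <?php
        // Map an image key to one of N hostname aliases of the same S3 bucket/CDN.
        // Browsers typically open about 6 connections per hostname, so 2-4 aliases
        // is usually enough; 10 mostly adds DNS lookups for little gain.
        function imageHost($key, $shards = 4) {
            $n = abs(crc32($key)) % $shards;   // deterministic, not random
            return "img{$n}.example.com";
        }

        $url = 'http://' . imageHost($filename) . '/' . $filename;

    Note that the per-hostname connection limit is the browser's, so beyond a few shards the overhead outweighs the parallelism. Memcached can help for generated thumbnails, but for plain S3 objects a CDN such as CloudFront in front of the bucket is the more usual answer.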

    Read the article

  • Access denied for user 'root@localhost' (using password:NO)

    - by murgatroid99
    I am attempting to install a network management package called cacti on Ubuntu running under Windows Virtual PC. I attempted to install MySQL, as it is one of cacti's dependencies. I can install and start the MySQL server, but whenever I try to access it in any other way, such as to change the password, I get the error message "Access denied for user 'root'@'localhost' (using password: NO)". I would like to know what is causing this and how to fix it. Edit (just in case my comments are not visible): the answers from HD and Devin Ceartas did not work for me.
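
    A commonly suggested recovery path (a sketch for MySQL 5.x on Ubuntu of this era; adjust service and package names to your install) is to restart mysqld with grant checking disabled and reset the root password:

        sudo service mysql stop
        sudo mysqld_safe --skip-grant-tables &
        mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"
        sudo service mysql restart

    On Debian/Ubuntu packages, sudo dpkg-reconfigure mysql-server-5.1 also offers to set a new root password.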

    Read the article

  • How to create a batch file that deletes all the folders named `bin` or `obj` recursively?

    - by Nam Gi VU
    I need to delete all bin and obj folders inside a folder on my PC, so I'm thinking of a batch file to do it, but I'm not familiar with batch files on Windows. Please help. [Edit] After discussion with user DMA57361, I got to the current solution (still having a problem, though; see our comments). Create a .bat file and paste one of the commands below:

        start for /d /r . %%d in (bin,obj) do @if exist "%%d" rd /s/q "%%d"

    or

        start for /d /r . %%d in (bin,obj) do @if exist "%%d" rd /s "%%d"

    @DMA57361: When I run your script, I get the below error. Any idea?
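
    For reference, a minimal sketch of the form this command usually takes (the leading "start" in the attempts above launches a separate window and is the likely problem; also note that %%d belongs in .bat files, while a plain %d is used when typing directly at the prompt):

        @echo off
        rem Recursively remove every bin and obj folder under the current directory.
        rem Drop /q to get a confirmation prompt on each folder during a trial run.
        for /d /r . %%d in (bin,obj) do @if exist "%%d" rd /s /q "%%d"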

    Read the article

  • Adding to database. No repeat on refresh

    - by kevstarlive
    I have this code:

    Episode.php

        <?php
        $feedback = new feedback;
        $articles = $feedback->fetch_all();

        if (isset($_POST['name'], $_POST['post'])) {
            $cast = $_GET['id'];
            $name = $_POST['name'];
            $email = $_POST['email'];
            $post = nl2br($_POST['post']);
            $ipaddress = $_SERVER['REMOTE_ADDR'];

            if (empty($name) or empty($post)) {
                $error = 'All Fields Are Required!';
            } else {
                $query = $pdo->prepare('INSERT INTO comments (cast, name, email, post, ipaddress) VALUES(?, ?, ?, ?, ?)');
                $query->bindValue(1, $cast);
                $query->bindValue(2, $name);
                $query->bindValue(3, $email);
                $query->bindValue(4, $post);
                $query->bindValue(5, $ipaddress);
                $query->execute();
            }
        }
        ?>
        <div align="center">
            <strong>Give us your feedback?</strong><br /><br />
            <?php if (isset($error)) { ?>
                <small style="color:#aa0000;"><?php echo $error; ?></small><br /><br />
            <?php } ?>
            <form action="episode.php?id=<?php echo $data['cast_id']; ?>" method="post" autocomplete="off" enctype="multipart/form-data">
                <input type="text" name="name" placeholder="Name" /> /
                <input type="text" name="email" placeholder="Email" /><small style="color:#aa0000;">*</small><br /><br />
                <textarea rows="10" cols="50" name="post" placeholder="Comment"></textarea><br /><br />
                <input type="submit" onclick="myFunction()" value="Add Comment" /><br /><br />
                <small style="color:#aa0000;">* <b>Email will not be displayed publicly</b></small><br />
            </form>
        </div>

    Include.php

        class feedback {
            public function fetch_all() {
                global $pdo;
                $query = $pdo->prepare("SELECT * FROM comments");
                $query->bindValue(1, $cast);
                $query->execute();
                return $query->fetchAll();
            }
        }

    This code updates the database as it is supposed to, but after submission it reloads the current page, as specified in the form action. When I refresh the page to see the comment that was added, the browser asks me to resubmit, and if I hit submit the comment is added again. How can I stop this from happening? Maybe I could hide the comment box and display a thank-you message, but that would not stop a repeat entry. Please help. Thank you. Kev
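
    The standard fix for this is the Post/Redirect/Get pattern: after a successful INSERT, redirect back to the same page, so the browser's "current page" becomes a GET request and a refresh cannot repeat the POST. A sketch of how the else branch above could end (assuming episode.php and the id parameter used in the form):

        } else {
            $query = $pdo->prepare('INSERT INTO comments (cast, name, email, post, ipaddress) VALUES(?, ?, ?, ?, ?)');
            // ... bindValue calls as above ...
            $query->execute();
            // Redirect so a refresh re-requests the page instead of re-posting the form.
            header('Location: episode.php?id=' . urlencode($cast));
            exit;
        }

    Note that header() only works if nothing has been output to the browser before it is called.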

    Read the article

  • Java - multithreaded access to a local value store which is periodically cleared

    - by Telax
    I'm hoping for some advice or suggestions on how best to handle multi-threaded access to a value store. My local value store is designed to hold objects that are currently in use; if an object is not in use, it is removed from the store. A value is pumped into my store via thread 1, its entry into the store is announced to listeners, and the value is stored. Values arriving on thread 1 are either totally new values or updates to existing values. A timer periodically removes from the store any value not currently in use, so all that remains of such a value is its ID, held locally by an intermediary. Now, an active element on thread 2 may wake up and try to access a set of values by passing the set of value IDs it knows about. Some values will already be stored (great) and some may not (sad face); those not already stored will be retrieved from an external source. My main issue is that an item which is not yet stored and is currently being queried for may arrive on thread 1 before the query completes. I'd like to avoid locking access to the whole store while a query is being made, as it may take some time.
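
    One pattern that fits this shape (a sketch, assuming Java 5+; String and Object stand in for the real ID and value types, and fetchExternally is a placeholder for the external source): keep a Future per ID in a ConcurrentHashMap. The store is never locked as a whole, and two threads asking for the same missing ID share a single external fetch:

        import java.util.concurrent.*;

        class ValueStore {
            private final ConcurrentMap<String, Future<Object>> store =
                new ConcurrentHashMap<String, Future<Object>>();

            Object get(final String id) throws InterruptedException, ExecutionException {
                Future<Object> f = store.get(id);
                if (f == null) {
                    FutureTask<Object> task = new FutureTask<Object>(new Callable<Object>() {
                        public Object call() { return fetchExternally(id); } // slow external lookup
                    });
                    f = store.putIfAbsent(id, task); // only one caller wins the race
                    if (f == null) {
                        f = task;
                        task.run(); // the winner performs the fetch; others wait on the same Future
                    }
                }
                return f.get(); // blocks only callers interested in this one ID
            }

            Object fetchExternally(String id) { /* placeholder */ return null; }
        }

    The timer can then remove idle entries with store.remove(id) without disturbing in-flight fetches for other IDs, and an update arriving on thread 1 can simply replace a completed entry. (This is essentially the Memoizer pattern from Java Concurrency in Practice.)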

    Read the article

  • ssh on Windows Vista and Ubuntu 12.04

    - by Adebayo
    Greetings to all. On my Fujitsu system with an Intel processor, I cannot ssh from my Windows Vista partition using PuTTY, and I also cannot ssh from my Ubuntu 12.04 partition. I am trying to ssh into a remote machine where I have an account, but I always get "Connection refused". Yet from the desktop computer in my office, using the same PuTTY, I can ssh to the remote machine. I have tried to follow several comments, but none has worked for me. Please, I need help.
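
    A few checks that usually narrow down "Connection refused" (a sketch; replace the host name and port with your own):

        # Verbose client output shows exactly where the connection fails
        ssh -v user@remote.example.com

        # Is port 22 reachable at all from this network?
        nc -vz remote.example.com 22

    If the office machine connects but your own networks do not, the usual suspects are an ISP or home router blocking outbound port 22, or the server accepting connections only from known source addresses.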

    Read the article

  • linux process scheduling delayed for long time

    - by Medicine
    I ran strace on my multi-threaded C++ application on Linux. After a couple of hours of running, none of the threads got to run for about 12 seconds. strace showed a select() system call, made with a timeout, as unfinished before the thread was suspended; when the thread resumed, strace reported that the call had taken 11.x seconds to finish. This is a clear indication that the process was starved for a long time. All threads in the process are created with Linux's default scheduling policy (SCHED_OTHER) and default priority. There are another five similar apps running on the same box which are also heavily I/O-bound, like this app, due to heavy data received on their sockets. But most of the time it is this app that suffers the scheduling delay, even though the other apps are created with the same scheduling policy and priority, i.e. the defaults. Why does only this process get blocked almost all of the time? Could it be because this process is more I/O-intensive, i.e. busier, due to possibly higher data rates? Is Linux's dynamic priority adjustment in play here, pushing this process down?
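
    One experiment that can confirm or rule out the dynamic-priority theory (a sketch; it requires root, and real-time policies can themselves starve the rest of the system, so keep the priority conservative):

        # Run the process under round-robin real-time scheduling at the lowest RT priority
        sudo chrt -r 1 ./myapp

        # Or just raise the dynamic priority of the already-running process
        sudo renice -10 -p <pid>

    If the stalls disappear under chrt or renice, SCHED_OTHER's dynamic priority adjustment is the likely culprit.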

    Read the article

  • Issue with Administratively Assigned Offline Files

    - by ZnewmaN
    I need to use Administratively Assigned Offline Files in conjunction with folder redirection, but user home folders live on 26 different shares. Do I just need to add 52 file paths similar to the following, and so on?

        \\server\shareA\%username%\Desktop
        \\server\shareA\%username%\My Documents
        \\server\shareB\%username%\Desktop
        \\server\shareB\%username%\My Documents
        ...

    Or do I need to create 26 GPOs, one for each share? Or is there an easier way to do it? Edit: The solution provided by @berniewhite in the comments, using %homeshare%, resolved the issue; Administratively Assigned Offline Files is now working well.

    Read the article

  • Unable to update the EntitySet because it has a DefiningQuery and no <UpdateFunction> element

    - by Harish Ranganathan
    When working with an ADO.NET Entity Data Model, it is common to generate the entity schema for more than a single table from our database. With entity model generation automated by Visual Studio, it becomes even more tempting to create and work with entity models to achieve an object mapping relationship. One of the errors you might hit while trying to update an entity set, either programmatically using context.SaveChanges or via the automatic insert/update code generated by GridView etc., is: "Unable to update the EntitySet <EntityName> because it has a DefiningQuery and no <UpdateFunction> element exists in the <ModificationFunctionMapping> element to support the current operation." While the description is pretty lengthy, the immediate instinct is to open the generated entity model code and see whether you can update it accordingly. However, the first thing to check, if the entity set was generated from a table, is whether that table defines a primary key. Most of the time we create tables with primary keys, but reference tables and other tables without a primary key cannot be updated through the entity context, and hence throw this error. Unless the entity set is a view, in which case the default model is read-only by design, the most frequent cause of the above error is a missing primary key in the table. There are other reasons why this error could pop up, which I am not going into for the sake of simplicity. If you find something new, please feel free to share it in the comments. Hope this helps. Cheers!!!
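
    For the common case, the fix is simply to give the table a primary key and refresh the model. A sketch (the table and column names are illustrative):

        -- Add a primary key to the offending table, then update the .edmx from the database
        ALTER TABLE dbo.StatusCodes
        ADD CONSTRAINT PK_StatusCodes PRIMARY KEY (StatusCodeID);

    After refreshing the Entity Data Model, the entity set is generated with a key and becomes updatable.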

    Read the article

  • Social Shopping

    - by David Dorf
    I've written about various breeds of social shopping in the past, so I decided to give some thought to a categorization with examples. Below I've listed the different types of social shopping I've observed and some companies that support them.

    - Comments and Ratings -- Commenting on products has been around almost as long as e-commerce. Two popular players in this space are BazaarVoice and PowerReviews. Most shoppers rely on peer reviews rather than retailer descriptions, so the influence on sales is very strong.
    - f-commerce -- A new term that was sure to rear its ugly head when retailers started allowing shopping on Facebook. And it's all Elastic Path and Alvenda's fault!
    - Co-shopping -- Retailers like Wet Seal are enabling multiple people to shop together online. This is particularly applicable to fashion, where the real-time exchange of opinions is important. I actually tried this with a co-worker and it's pretty cool.
    - Bragging -- Blippy is Twitter for shoppers, allowing purchases to be "tweeted" so you can keep up with your friends. I get alerted when friends download music or apps from iTunes, because chances are I'll be interested as well. This covert influence is one-upped by Snatter, a service that gives people discounts for tweeting or posting promotions from retailers. This is the petri dish of viral marketing.
    - Advice -- Combine the bragging of Blippy with the opinions from BazaarVoice and you'd get ShopSocially, a social network dedicated to spreading product knowledge among informed shoppers.

    I'm sure that if I gave it more thought, a few more types would come to mind, but I've got to get back to work. Now is not the time to be blogging at Oracle!

    Read the article

  • WiX 3 Tutorial: Understanding main WXS and WXI file

    - by Mladen Prajdic
    In the previous post we took a look at the WiX solution/project structure and project properties. We're still playing with our SuperForm application, and today we'll look at the general parts of the main wxs file, SuperForm.wxs, and the wxi include file. For the wxs file we'll just go over a general description of what each part does in the code comments; more detailed descriptions will follow in future posts about the features themselves.

    WXI include file

    Include files are exactly what their name implies. To use a wxi file you have to include it at the beginning of each .wxs file that needs it. If you've ever worked with C++, you can think of include files as .h files. For example, if you include SuperFormVariables.wxi in SuperForm.wxs, the variables in the wxi won't be seen in FilesFragment.wxs or RegistryFragment.wxs; you'd have to include it manually in those two wxs files too. For a preprocessor variable $(var.VariableName) to be seen by every file in the project, you have to add it to the WiX project properties -> Build -> "Define preprocessor variables" textbox. I've chosen not to go this route, because in multi-developer teams not everyone has the same directory structure, and a single shared variable would mean each developer had to check out the wixproj file to edit it. This is pretty much unacceptable by my standards. This is why we added a system environment variable named SuperFormFilesDir, as shown in the previous WiX tutorial post. Because FilesFragment.wxs is autogenerated on every project build, we don't want to have to change it manually each time by adding the wxi include at the beginning of the file; otherwise we couldn't recreate it in each pre-build event.

        <?xml version="1.0" encoding="utf-8"?>
        <Include>
          <!-- Versioning. These have to be changed for upgrades.
               It's not enough to just include newer files. -->
          <?define MajorVersion="1" ?>
          <?define MinorVersion="0" ?>
          <?define BuildVersion="0" ?>
          <!-- Revision is NOT used by WiX in the upgrade procedure -->
          <?define Revision="0" ?>
          <!-- Full version number to display -->
          <?define VersionNumber="$(var.MajorVersion).$(var.MinorVersion).$(var.BuildVersion).$(var.Revision)" ?>
          <!-- Upgrade code HAS to be the same for all updates.
               Once you've chosen it, don't change it. -->
          <?define UpgradeCode="YOUR-GUID-HERE" ?>
          <!-- Path to the resources directory. Resources don't really need to be included
               in the project structure, but I like to include them for clarity. -->
          <?define ResourcesDir="$(var.ProjectDir)\Resources" ?>
          <!-- The name of your application exe file. This will be used to kill
               the process when updating and to create the desktop shortcut. -->
          <?define ExeProcessName="SuperForm.MainApp.exe" ?>
        </Include>

    For now there's no way to tell WiX in Visual Studio to make a wxi include file available to the whole project, so you have to include it in each file separately. Only variables set in "Define preprocessor variables" or system environment variables are accessible to the whole project.

    The main WXS file: SuperForm.wxs

    We'll only take a look at the general structure of the main SuperForm.wxs, not its details; those will be covered in future posts. The code comments should provide plenty of information about what each part does in general. There are five major parts: the upgrade handling, the conditions and actions, the UI install sequence, the directory structure, and the features we want to include.
        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Add the xmlns:util namespace definition to be able to use stuff from the WixUtilExtension dll -->
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"
             xmlns:util="http://schemas.microsoft.com/wix/UtilExtension">
          <!-- This is how we include wxi files -->
          <?include $(sys.CURRENTDIR)Includes\SuperFormVariables.wxi ?>
          <!-- Id="*" is to enable upgrading. * means that the product ID will be
               autogenerated on each build. Name is made of the localized product
               name and the version number. -->
          <Product Id="*"
                   Name="!(loc.ProductName) $(var.VersionNumber)"
                   Language="!(loc.LANG)"
                   Version="$(var.VersionNumber)"
                   Manufacturer="!(loc.ManufacturerName)"
                   UpgradeCode="$(var.UpgradeCode)">
            <!-- Define the minimum supported installer version (3.0) and that the
                 install should be done for the whole machine, not just the current user -->
            <Package InstallerVersion="300" Compressed="yes" InstallScope="perMachine"/>
            <Media Id="1" Cabinet="media1.cab" EmbedCab="yes" />
            <!-- Upgrade settings. This will be explained in more detail in a future post -->
            <Upgrade Id="$(var.UpgradeCode)">
              <UpgradeVersion OnlyDetect="yes" Minimum="$(var.VersionNumber)"
                              IncludeMinimum="no" Property="NEWER_VERSION_FOUND" />
              <UpgradeVersion Minimum="0.0.0.0" IncludeMinimum="yes"
                              Maximum="$(var.VersionNumber)" IncludeMaximum="no"
                              Property="OLDER_VERSION_FOUND" />
            </Upgrade>
            <!-- Reference the global NETFRAMEWORK35 property to check if it exists -->
            <PropertyRef Id="NETFRAMEWORK35"/>
            <!-- Startup conditions that check whether .NET Framework 3.5 is installed
                 and whether we're running an OS newer than Windows XP SP2. If not, the
                 installation is aborted. The (Installed OR ...) pattern means the
                 condition is only evaluated while installing, not on uninstall or change -->
            <Condition Message="!(loc.DotNetFrameworkNeeded)">
              <![CDATA[Installed OR NETFRAMEWORK35]]>
            </Condition>
            <Condition Message="!(loc.AppNotSupported)">
              <![CDATA[Installed OR ((VersionNT >= 501 AND ServicePackLevel >= 2) OR (VersionNT >= 502))]]>
            </Condition>
            <!-- This custom action in the InstallExecuteSequence is needed to stop
                 silent install (passing /qb to msiexec) from going around it. -->
            <CustomAction Id="NewerVersionFound" Error="!(loc.SuperFormNewerVersionInstalled)" />
            <InstallExecuteSequence>
              <!-- Check for newer versions with FindRelatedProducts and execute the custom action after it -->
              <Custom Action="NewerVersionFound" After="FindRelatedProducts">
                <![CDATA[NEWER_VERSION_FOUND]]>
              </Custom>
              <!-- Remove the previous versions of the product -->
              <RemoveExistingProducts After="InstallInitialize"/>
              <!-- WixCloseApplications is a built-in custom action that uses util:CloseApplication below -->
              <Custom Action="WixCloseApplications" Before="InstallInitialize" />
            </InstallExecuteSequence>
            <!-- This will ask the user to close the SuperForm app if it's running while upgrading -->
            <util:CloseApplication Id="CloseSuperForm" CloseMessage="no"
                                   Description="!(loc.MustCloseSuperForm)"
                                   ElevatedCloseMessage="no" RebootPrompt="no"
                                   Target="$(var.ExeProcessName)" />
            <!-- Use the built-in WixUI_InstallDir GUI -->
            <UIRef Id="WixUI_InstallDir" />
            <UI>
              <!-- These dialog references are needed for CloseApplication above to work correctly -->
              <DialogRef Id="FilesInUse" />
              <DialogRef Id="MsiRMFilesInUse" />
              <!-- Here we'll add the GUI logic for installation and updating in a future post -->
            </UI>
            <!-- Set the icon to show next to the program name in Add/Remove Programs -->
            <Icon Id="SuperFormIcon.ico" SourceFile="$(var.ResourcesDir)\Exclam.ico" />
            <Property Id="ARPPRODUCTICON" Value="SuperFormIcon.ico" />
            <!-- Installer UI custom pictures. File names are made up. Add the path to your pics. -->
            <!--
            <WixVariable Id="WixUIDialogBmp" Value="MyAppLogo.jpg" />
            <WixVariable Id="WixUIBannerBmp" Value="installBanner.jpg" />
            -->
            <!-- The default directory structure -->
            <Directory Id="TARGETDIR" Name="SourceDir">
              <Directory Id="ProgramFilesFolder">
                <Directory Id="INSTALLLOCATION" Name="!(loc.ProductName)" />
              </Directory>
            </Directory>
            <!-- Set the default install location to the value of INSTALLLOCATION
                 (usually c:\Program Files\YourProductName) -->
            <Property Id="WIXUI_INSTALLDIR" Value="INSTALLLOCATION" />
            <!-- Set the components defined in our fragment files that will be used for our feature -->
            <Feature Id="SuperFormFeature" Title="!(loc.ProductName)" Level="1">
              <ComponentGroupRef Id="SuperFormFiles" />
              <ComponentRef Id="cmpVersionInRegistry" />
              <ComponentRef Id="cmpIsThisUpdateInRegistry" />
            </Feature>
          </Product>
        </Wix>

    For more info on what certain attributes mean, you should look into the WiX documentation.

    WiX 3 tutorial by Mladen Prajdic - navigation:
    - WiX 3 Tutorial: Solution/Project structure and Dev resources
    - WiX 3 Tutorial: Understanding main wxs and wxi file
    - WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best

    - by pinaldave
    This is a follow-up to my earlier article SQL SERVER – Convert IN to EXISTS – Performance Talk; after reading all the comments I received, I felt I could write more on the same subject to clear a few things up. First, let us run the following four queries; all of them return exactly the same resultset.

        USE AdventureWorks
        GO
        -- use of =
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID = (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of IN
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID IN (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of EXISTS
        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of JOIN
        SELECT *
        FROM HumanResources.Employee E
        INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID
        GO

    Let us compare the execution plans of the queries listed above (click on the image to see a larger version). It is quite clear from the plans that in the case of IN, EXISTS and JOIN, the SQL Server engine is smart enough to figure out that the optimal plan for this query is a Merge Join, and it executes exactly that. However, in the case of the Equals (=) operator, SQL Server is forced to use a Nested Loop, testing each result of the inner query against the outer query, which cuts performance. Please note that I am not suggesting here that Nested Loop is bad or that Merge Join is better; this can very well vary with your machine and the resources available on it. When I see the Equals (=) operator used in a query like the one above, I usually recommend checking whether IN, EXISTS or JOIN can be used instead. As I said, this can vary from system to system. What is your take on the above query? I believe the SQL Server engine is usually smart enough to figure out the ideal execution plan and use it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Fix: Orchard error 'The controller for path '/OrchardLocal/' was not found or does not implement IController.'

    - by Ken Cox [MVP]
    Suddenly, in a local Orchard 1.6 project, I started getting this error in ShellRoute.cs: "The controller for path '/OrchardLocal/' was not found or does not implement IController." Obviously I had changed something, but the error wasn't helping much. After losing far too much time, I copied over the original Orchard source code and was back in business. Shortly thereafter, I further flattened my forehead by applying a sudden, solid blow with the lower portion of my palm! You see, in testing the import of comments via BlogML, I had set the newly added blog as the Orchard site's start page. Then I deleted the blog so I could test another import batch. The upshot was that by deleting the blog, Orchard no longer had a default (home) page at the root of the site; the site's default content was missing. The fix was to go to the admin area (http://localhost:30320/OrchardLocal/admin), add a new page, and check "Set as homepage". Once again, the problem was between the keyboard and the chair. I hope this helps someone else. Ken

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I wrote a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post discussing the advantages and disadvantages of shrinking and why one should not shrink a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I feel his comment is worth a big article in itself, so for everybody to read his wonderful explanation, I am posting it in this blog post. Thanks Imran! Shrinking a database always degrades performance and increases fragmentation in the database; I suggest you keep that in mind before you start reading the following comment. If you are going to say that shrinking a database is bad and evil, I am saying it here first and loud. Now, Imran's comment is written with only the mechanics of how the shrink operation works in mind; Imran has explained his understanding and requests further explanation. I have removed the Best Practices section from his comments, as it needed a few corrections.

    Comments from Imran:

    Before I explain the concept of shrinking a database, let us understand the concept of database files. When we create a new database in SQL Server, it typically creates two physical files in the operating system: one with the .MDF extension and another with the .LDF extension. .MDF is called the Primary Data File; .LDF is called the Transaction Log File. If you add one or more data files to a database, the physical files created in the operating system will have the extension .NDF, and they are called Secondary Data Files; whereas when you add one or more log files, the physical files created will have the same .LDF extension. The questions now are: "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?" Answers (note: the following explanation is based on my limited knowledge of SQL Server, so experts please do comment): a data file with the .MDF extension is called a Primary Data File because it contains the database catalogs. Catalogs mean metadata, and metadata is "data about data". Examples of metadata include the system objects that store information about other objects, as opposed to the data stored by users: sysobjects stores information about all objects in the database; sysindexes stores information about all indexes and rows of every table in the database; syscolumns stores information about all columns each table has in the database; sysusers stores how many users the database has. Although metadata stores information about other objects, it is not the transactional data that a user enters; rather, it is system data about the data. Because the Primary Data File (.MDF) contains this important information about the database, it is treated as a special file and given the name Primary Data File. This file is placed in the primary filegroup. You can always create additional objects (tables, indexes etc.) in the primary data file (which belongs to the primary filegroup) by specifying that you want to create the object in the primary filegroup.
    Any additional data file that you add to the database will contain only transactional data and no metadata; that is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a given data file is a primary or a secondary data file. There are many advantages to storing data in different files under different filegroups. You can put your read-only tables in one file (filegroup) and your read-write tables in another, and back up only the filegroup holding the read-write data, avoiding backups of read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-world scenario where we use files could be this one: say you have created a database called MYDB on the D: drive, which has 50 GB of space, with one data file (.MDF) and one log file, and suppose all of that 50 GB has been used up; you have no free space left but still want to add space to the database. One easy option would be to add another physical hard disk to the server, add a new data file to the MYDB database, create this new data file on the new hard disk, then move some of the objects from one file to the other, and make the filegroup containing the new file the default filegroup, so that any new object created goes into the new files unless specified otherwise. Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic: shrinking. First of all, I disagree with Microsoft's terminology in naming this feature "shrinking". Shrinking, in everyday terms, means reducing the size of a file by compressing it, BUT in SQL Server shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from the database files and releasing that empty space either to the operating system or to SQL Server. Let's examine this through an example. Say you have a database, MYDB, with a size of 50 GB, of which about 20 GB is free space: 30 GB of the database is filled with data, and the 20 GB of space is free because it is reserved by the database but not currently utilized by SQL Server. If you choose to shrink the database and release the empty space to the operating system, then, MIND YOU, you can only shrink the database down to 30 GB (in our example); you cannot shrink it to a size smaller than what is filled with data. So, if you have a database whose data file and log file are full, with no empty space, and you do not have extra disk space to turn the autogrow option on, you CANNOT issue the SHRINK DATABASE/FILE command, for two reasons: first, there is no empty space to be released, because the shrink command does not compress the database, it only removes empty space from the database files, and there is none; second, the shrink command is a logged operation; when we perform a shrink, this is recorded in the log file, and if there is no empty space in the log file, SQL Server cannot write to it and you cannot shrink the database. Now, answering your questions: (1) Q: What are the USEDPAGES and ESTIMATEDPAGES that appear in the results pane after running DBCC SHRINKDATABASE (NorthWind, 10)?
    A: According to Books Online (for SQL Server 2000): UsedPages is the number of 8-KB pages currently used by the file; EstimatedPages is the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important note: before asking any question, make sure you go through Books Online or do a quick search first. The reasons for doing so have real advantages: 1. If someone else has already had this question, the chances that it has already been answered are better than 50%. 2. It reduces your waiting time for an answer. (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console, by right-clicking the database, going to Tasks and then selecting the Shrink option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of issuing the command from Query Analyzer is that your console won't freeze, so you can carry on with your regular activities in Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end users, DBAs, or the server/system itself? A: An .NDF file is a secondary data file. You have never heard of it because when a database is created, SQL Server creates it by default with only one data file (.MDF) and one log file (.LDF), or however your model database has been set up, because the model database is the template used every time you create a new database with the CREATE DATABASE command. Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store the data saved by users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
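
    To make the discussion concrete, a small sketch of the commands involved (the database and logical file names are illustrative; remember the caveat above about fragmentation before running these on anything real):

        -- Shrink MYDB, leaving 10 percent free space in its files;
        -- the resultset includes the UsedPages and EstimatedPages columns discussed above
        DBCC SHRINKDATABASE (MYDB, 10);

        -- Shrink a single data file to a specific target size (in MB)
        USE MYDB;
        DBCC SHRINKFILE (MYDB_Data, 30720);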

    Read the article

  • Password Security: Short and Complex versus ‘Short or Lengthy’ and Less Complex

    - by Akemi Iwaya
    Creating secure passwords for our online accounts is a necessary evil, given the huge increase in database and account hacking these days. The problem, though, is that no two companies have a similar policy for complex and secure password creation; factor in the continued creation of insecure passwords, and the reuse of the same password across multiple sites, and trouble is just waiting to happen. Ars Technica decided to take a look at multiple password types, how users fared with them, and how well those password types held up to cracking attempts, in their latest study. The password types Ars Technica looked at were comprehensive8, basic8, and basic16. The comprehensive type required a mix of upper-case letters, lower-case letters, digits, and symbols, with no dictionary words allowed; the only restriction on the two basic types was the number of characters used. Which type do you think was easier for users to adopt, and which did better in the two password-cracking tests? You can learn more about how users did with the three password types and the results of the tests by visiting the article linked below. What are your thoughts on the matter? Are shorter, more complex passwords better or worse than short or lengthy but less complex passwords? What methods do you feel work best, given that most passwords are limited to approximately 16 characters in length? Perhaps you use a service like LastPass or keep a dedicated list/notebook to manage your passwords. Let us know in the comments!
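
    A quick way to see why length can beat complexity is to compare search spaces (a back-of-the-envelope sketch in Python 3; it assumes truly random characters, which human-chosen passwords are not):

        import math

        def entropy_bits(length, alphabet_size):
            """Bits of entropy for a random password: length * log2(alphabet size)."""
            return length * math.log2(alphabet_size)

        print(entropy_bits(8, 95))    # comprehensive8: ~52.6 bits (95 printable ASCII chars)
        print(entropy_bits(16, 26))   # basic16, lowercase only: ~75.2 bits

    By this measure, a random 16-character lowercase password is a far harder brute-force target than a random 8-character fully complex one.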

    Read the article

  • [Visual Studio Extension Of The Day] Test Scribe for Visual Studio Ultimate 2010 and Test Professional 2010

    - by Hosam Kamel
    Test Scribe is a documentation power tool designed to construct documents directly from TFS test plan and test run artifacts, for discussion, reporting, and so on.

    Known issues/limitations:

    - Customizing the generated report by changing the template, adding comments, including attachments, etc. is not supported.
    - When opening a test plan summary document in Office 2007, if you get the warning "The file Test Plan Summary cannot be opened because there are problems with the contents" (with details: "The file is corrupt and cannot be opened"), click OK, then click Yes to recover the contents of the document. This will open the document in Office 2007. The same problem does not occur in Office 2010.
    - Generated documents are stored by default in the "My Documents" folder; the output path of the generated report cannot be modified.
    - Exporting Word documents for individual test suites or test cases in a test plan is not supported.

    Download it from the Visual Studio Extension Manager. Originally posted at "Hosam Kamel | Developer & Platform Evangelist": http://blogs.msdn.com/hkamel

    Read the article

  • Microsoft launches IE9 preview – No support for XP

    - by samsudeen
    Microsoft launched the developer preview of Internet Explorer 9 (IE9) at the MIX 10 web conference yesterday. This release is aimed at getting feedback from website designers, developers and the wider community, to make IE9's development better than that of its previous versions. Microsoft will update the developer preview every eight weeks; the next update is expected in mid-March. So what is new and interesting about IE9?

    Chakra: Chakra, IE9's new scripting engine, runs JavaScript much faster than IE8 and other browsers, improving performance significantly. According to Microsoft, Chakra compiles JavaScript in the background on a separate thread, in parallel with the main engine, which is a completely new approach compared with current browser technologies.

    Standards: Microsoft is (surprisingly!) keen to make IE9 compliant with open web standards, with hardware-accelerated support for HTML5 video and support for newer web technologies such as CSS3 and SVG.

    Acid3 test: IE9 scores 55/100 on its latest Acid3 test, which is much better than IE8's score of 22/100 but nowhere near its rivals Chrome, Opera, and Safari, which score 100/100. I am a little disappointed at not being able to download the developer preview on my XP machine. The early comments look very positive for IE9. If you want to explore IE9, check Microsoft's test-drive site at Microsoft IE9 Test-drive. You can also download the IE9 developer preview at Download Preview.

    Read the article
