Search Results

Search found 72319 results on 2893 pages for 'file explorer'.


  • create xml from object

    - by Gannesh
    Basically I want to create an XML-designer kind of tool in Flex, with which the user can add/edit components and properties of a view/dashboard. I am storing the view structure in an XML file, which I parse at runtime to display the view. How do I convert an object (having properties and sub-objects) to an XML node (having attributes and elements) and add that node to the existing XML file, so that the next time I parse the file I get the new component in my view/dashboard? For example, the structure of a component in the XML file:

        <view id="productView" label="Products">
            <panel id="chartPanel" type="CHART" ChartType="Pie2D" title="Productwise Sales"
                   x="215" y="80" width="425" height="240" showValues="0">
            </panel>
        </view>

    Thanks in advance.
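    One possible direction, offered only as a hedged ActionScript 3 / E4X sketch (the function, variable and element names below are invented, and a real dashboard object may need more type handling):

        // Hypothetical sketch: turn a plain object's simple properties into attributes
        // and its sub-objects into child elements, then append the node to the view XML.
        function appendComponent(viewNode:XML, comp:Object):void {
            var node:XML = <panel/>;   // element name chosen only for this example
            for (var prop:String in comp) {
                var value:* = comp[prop];
                if (value is String || value is Number || value is Boolean) {
                    node.@[prop] = value;            // simple properties become attributes
                } else {
                    appendComponent(node, value);    // sub-objects become nested elements
                }
            }
            viewNode.appendChild(node);
        }

    The updated XML would then have to be written back to wherever the view structure is stored (for example via a server call) so the next parse picks up the new component.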

    Read the article

  • HowTo set Icon to Qt Application, created with Qt Visual Studio Add-in?

    - by mosg
    Hello. Here is what I have: Visual Studio 2008 (on 32-bit Windows XP), Qt libraries 4.6.2 for Windows (VS 2008, 194 MB), and the Visual Studio Add-in (44 MB). After I installed all the software, I created a simple Qt Application project with Visual Studio: menu File | New | Project... and Qt4 Projects | Qt Application. It builds, and here is the question: how do I set the application icon for my compiled exe file? I need to see the specified ICO in Explorer! The old method with MyProject.pro is not what I'm after!!! For reference, the qmake/.pro method is:

    1. Create a .ico file with both 16x16 and 32x32-pixel versions of the icon (you can do this in Visual Studio).
    2. Create a .rc file containing the following text: IDI_ICON1 ICON DISCARDABLE "myIcon.ico"
    3. Add the following to your .pro file: RC_FILE = myFile.rc
    4. Run qmake.

    Thanks.

    Read the article

  • Silverlight MEF – Download On Demand

    - by PeterTweed
    Take the Slalom Challenge at www.slalomchallenge.com! A common challenge when building complex applications in Silverlight is the initial download size of the xap file. MEF enables us to build composable applications, which lets us construct complex composite applications. Wouldn't it be great if we had a mechanism to split components into different Silverlight applications in separate xap files, and download a separate xap file only if it is needed? MEF gives us the ability to do this. This post covers the basics needed to build such a composite application split between different Silverlight applications, downloading the referenced Silverlight application only when needed.

    Steps:

    1. Create a Silverlight 4 application.

    2. Add references to the following assemblies:

       System.ComponentModel.Composition.dll
       System.ComponentModel.Composition.Initialization.dll

    3. Add a new Silverlight 4 application called ExternalSilverlightApplication to the solution that was created in step 1. Ensure the new application is hosted in the web application for the solution, and choose not to create a test page for the new application.

    4. Delete the App.xaml and MainPage.xaml files – they aren't needed.

    5. Add references to the following assemblies in the ExternalSilverlightApplication project:

       System.ComponentModel.Composition.dll
       System.ComponentModel.Composition.Initialization.dll

    6. Ensure the two references above have their Copy Local values set to false. As we will have these two assemblies in the original Silverlight application, there is no need to include them in the built ExternalSilverlightApplication package.

    7. Add a new user control called LeftControl to the ExternalSilverlightApplication project.

    8. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Beige" Margin="40" >
            <Button Content="Left Content" Margin="30"></Button>
        </Grid>

    9. Add the following statement to the top of the LeftControl.xaml.cs file:

        using System.ComponentModel.Composition;

    10. Add the following attribute to the LeftControl class:

        [Export(typeof(LeftControl))]

        This attribute tells MEF that the type LeftControl will be exported – i.e. made available for other applications to import and compose into the application.

    11. Add a new user control called RightControl to the ExternalSilverlightApplication project.

    12. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Green" Margin="40" >
            <TextBlock Margin="40" Foreground="White" Text="Right Control" FontSize="16" VerticalAlignment="Center" HorizontalAlignment="Center" ></TextBlock>
        </Grid>

    13. Add the following statement to the top of the RightControl.xaml.cs file:

        using System.ComponentModel.Composition;

    14. Add the following attribute to the RightControl class:

        [Export(typeof(RightControl))]

    15. In your original Silverlight project, add a reference to the ExternalSilverlightApplication project.

    16. Change the reference to the ExternalSilverlightApplication project to have its Copy Local value = false. This ensures that the referenced ExternalSilverlightApplication application is not included in the original Silverlight application package when it is built. The ExternalSilverlightApplication application therefore has to be downloaded on demand by the original Silverlight application for its controls to be used.

    17. In your original Silverlight project, add the following xaml to the LayoutRoot Grid in MainPage.xaml:

        <Grid.RowDefinitions>
            <RowDefinition Height="65*" />
            <RowDefinition Height="235*" />
        </Grid.RowDefinitions>
        <Button Name="LoaderButton" Content="Download External Controls" Click="Button_Click"></Button>
        <StackPanel Grid.Row="1" Orientation="Horizontal" HorizontalAlignment="Center" >
            <Border Name="LeftContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
            <Border Name="RightContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
        </StackPanel>

        The borders will hold the controls that will be downloaded, imported and composed via MEF when the button is clicked.

    18. Add the following statement to the top of the MainPage.xaml.cs file:

        using System.ComponentModel.Composition;

    19. Add the following properties to the MainPage class:

        [Import(typeof(LeftControl))]
        public LeftControl LeftUserControl { get; set; }

        [Import(typeof(RightControl))]
        public RightControl RightUserControl { get; set; }

        This defines properties accepting the LeftControl and RightControl types. The attributes are used to tell MEF which discovered type should be applied to each property when composition occurs.

    20. Add the following event handler for the button click to the MainPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            DeploymentCatalog deploymentCatalog =
                new DeploymentCatalog("ExternalSilverlightApplication.xap");

            CompositionHost.Initialize(deploymentCatalog);

            deploymentCatalog.DownloadCompleted += (s, i) =>
            {
                if (i.Error == null)
                {
                    CompositionInitializer.SatisfyImports(this);

                    LeftContent.Child = LeftUserControl;
                    RightContent.Child = RightUserControl;
                    LoaderButton.IsEnabled = false;
                }
            };

            deploymentCatalog.DownloadAsync();
        }

        This is where the magic happens! The deploymentCatalog object is pointed to the ExternalSilverlightApplication.xap file. It is then associated with the CompositionHost initialization. As the download will be asynchronous, an event handler is created for the DownloadCompleted event. The deploymentCatalog object is then told to start the asynchronous download. The event handler that executes when the download is completed uses the CompositionInitializer.SatisfyImports() function to tell MEF to satisfy the imports for the current class. It is at this point that the LeftUserControl and RightUserControl properties are initialized with composed objects from the downloaded ExternalSilverlightApplication.xap package.

    21. Run the application, click the Download External Controls button, and see the controls defined in the ExternalSilverlightApplication application loaded into the original Silverlight application.

    Congratulations! You have implemented download-on-demand capabilities for composite applications using the MEF DeploymentCatalog class. You are now able to segment your applications into separate xap files for deployment.

    Read the article

  • User Productivity Kit - Powerful Packages (Part 2)

    - by [email protected]
    In my first post on packages I described what a package is and how it can be used. I also started explaining some of the considerations that should be taken into account when determining how to arrange your packages. The first is when the files are interrelated and depend on one another such as an HTML file and it's graphics. A second consideration is how the files are used in your outlines. Let's say you're using a dozen Word doc files. You could place them all in a single package or put each Word doc file in a separate package but what's the right thing to do? There are several factors that will influence your decision. To understand the first, let me explain a function of UPK publishing. Take an outline in UPK that has an attachment (concept, frame link, or hyperlink) that points to a file in a package. When you publish this outline, the publishing engine will determine that there is a link to a file in the package and copy the contents of the package to the publishing destination directory. This is done to ensure that any interrelated files are kept together. For the situation where you have an HTML file with links to number of graphics files, this is a good thing. If, however, the package has a dozen unrelated Word doc files and you link to only one of them, all dozen Word documents will be copied to the publishing destination directory.  Whether or not this is a good thing is dependent on two things. First, are all of the files in the package used in the outline that you're publishing? Take an outline that includes links to all of the Word documents in that dozen document package I described earlier. For this situation, you may choose to keep all the files in a single package for convenience. A second consideration is how your organization leverages reuse in UPK. In this context, I'm referring to the link style of reuse such as when you link to the same topic from multiple UPK outlines and changes to the topic appear in both places. Take an example where you have the earlier mentioned dozen Word document package and an outline with a dozen topics in it. Each topic has an attachment pointing to one of the Word documents in the package (frame link, concept, etc.) If you're only publishing this outline, the single package probably works fine but what if you're reusing one of these topics in another outline? As I explained earlier, linking to one file in the package will result in all files in the package being copied to your published output. In this example, linking to one topic in the first outline will result in all dozen Word documents being copied to the published output. This may result in files in the output that you don't want there for business or size reasons. This is a situation in which you should consider placing each of the Word documents in it's own separate package. With each document in it's own package, that link to a single document will result in only that single package and single Word document being copied to the published output. In my last post I had described that packages are documents in the UPK library. When using the multi-user version of the UPK Developer you can leverage standard library capabilities for managing the files in these packages during the development process - capabilities such as check in / check out, history, etc. When structuring your packages take into consideration how the authors are going to be adding, modifying and deleting files from the packages. A single package is a single document in the UPK library. 
    Like any other document in the library, only one user at a time can check out and edit the package. If you have a large number of files in a single package and these must be modified by many users, you need to consider whether this will cause problems as multiple users compete to update the same package. If the files don't depend on each other, consider placing them in separate packages to reduce contention. I hope you've enjoyed these two posts on how you can leverage the power of packages in your content. In summary, consider the following when structuring your packages:

    • Is the asset a single, standalone file or a set of files that depend on each other?
    • Will all the files always be used together in a single outline, or may only some of the files be needed based on how the content is reused across multiple outlines?
    • Will multiple developers need to update the files in a single package, or should you break it into multiple packages to reduce contention when checking out the document?

    We'd like to hear from you on how you're using packages in your content. Please add your comments below! Thank you, and I hope these two posts have given you additional insights into how to use packages in your content and structure them for efficient use.

    John Zaums
    Senior Director, Product Development
    Oracle User Productivity Kit

    Read the article

  • Using Logger Filter with Not Equal condition Log4net

    - by Mubashar Ahmad
    Dear all, I am using log4net in my C# application. I have different loggers which insert text lines into a single log file. Now I want to add a new logger which should not post log entries to the same file but should instead log to a different file, so I configured a new FileAppender. After trying whatever I found on the net, I am able to create a different file for my new logger, but its entries are echoed to the first log file too. So, if anybody knows about the use of log filters, please explain how I could add a "logger <> new logger" match to the previously configured appender (a possible filter configuration is sketched below). Regards, Mubashar
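    One way to express that "not equal" condition, sketched here as an assumption rather than a verified config: log4net's LoggerMatchFilter with acceptOnMatch set to false drops events whose logger name matches, while everything else still reaches the appender (the appender, file and logger names below are invented):

        <appender name="MainFileAppender" type="log4net.Appender.FileAppender">
          <file value="main.log" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date %-5level %logger - %message%newline" />
          </layout>
          <!-- Drop anything coming from the new logger; all other loggers are still accepted. -->
          <filter type="log4net.Filter.LoggerMatchFilter">
            <loggerToMatch value="MyApp.NewLogger" />
            <acceptOnMatch value="false" />
          </filter>
        </appender>

    Another common route, if the new logger has its own dedicated appender, is to declare it under its own <logger> element with additivity="false" so its events never propagate up to the root appenders at all.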

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill: An ETL process on text files; That had to interface between SQL Server and an AS/400 system; That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via: xp_cmdshell; That had to email reports and financial data, some of it sensitive Yep, the stench smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was: Payment files from multiple vendors in multiple formats; Sent via FTP, PGP encrypted email, or some other wizardry; Manually opened/downloaded and saved to a particular set of folders (couldn't change this); Once processed, had to be placed BACK in the same folders with the original archived; x2 divisions that had to run separately; Plus an additional vendor file in another format on a completely different schedule; So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible) I didn't feel so bad about the solution I came up with, which was naturally: Copy the payment files to the local SQL Server drives, using xp_cmdshell Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell) Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count) Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata Process payment batches and log which division and vendor they belong to Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly) Email another report showing unmatched payments so they could manually void them…about 3 months afterward All in "Excel" format, using xp_sendmail (SQL 2000 system) Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked) If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.  
I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately as far as I know it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripting regular expressions doesn't fit that criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!

    Read the article

  • Better documentation for tasks waiting on resources

    - by SQLOS Team
    The sys.dm_os_waiting_tasks DMV contains a wealth of useful information about tasks waiting on a resource, but until now detailed information about the resource being consumed - sys.dm_os_waiting_tasks.resource_description - hasn't been documented, apart from a rather self-evident "Description of the resource that is being consumed." Thanks to a recent Connect suggestion this column will get more information added. Here is a summary of the possible values that can appear in this column. Note this information is current for SQL Server 2008 R2 and Denali:

    Thread-pool resource owner:
    • threadpool id=scheduler<hex-address>

    Parallel query resource owner:
    • exchangeEvent id={Port|Pipe}<hex-address> WaitType=<exchange-wait-type> nodeId=<exchange-node-id>

    Exchange-wait-type can be one of the following:
    • e_waitNone
    • e_waitPipeNewRow
    • e_waitPipeGetRow
    • e_waitSynchronizeConsumerOpen
    • e_waitPortOpen
    • e_waitPortClose
    • e_waitRange

    Lock resource owner:
    • <type-specific-description> id=lock<lock-hex-address> mode=<mode> associatedObjectId=<associated-obj-id>

    <type-specific-description> can be:
    • For DATABASE: databaselock subresource=<databaselock-subresource> dbid=<db-id>
    • For FILE: filelock fileid=<file-id> subresource=<filelock-subresource> dbid=<db-id>
    • For OBJECT: objectlock lockPartition=<lock-partition-id> objid=<obj-id> subresource=<objectlock-subresource> dbid=<db-id>
    • For PAGE: pagelock fileid=<file-id> pageid=<page-id> dbid=<db-id> subresource=<pagelock-subresource>
    • For KEY: keylock hobtid=<hobt-id> dbid=<db-id>
    • For EXTENT: extentlock fileid=<file-id> pageid=<page-id> dbid=<db-id>
    • For RID: ridlock fileid=<file-id> pageid=<page-id> dbid=<db-id>
    • For APPLICATION: applicationlock hash=<hash> databasePrincipalId=<role-id> dbid=<db-id>
    • For METADATA: metadatalock subresource=<metadata-subresource> classid=<metadatalock-description> dbid=<db-id>
    • For HOBT: hobtlock hobtid=<hobt-id> subresource=<hobt-subresource> dbid=<db-id>
    • For ALLOCATION_UNIT: allocunitlock hobtid=<hobt-id> subresource=<alloc-unit-subresource> dbid=<db-id>

    <mode> can be: Sch-S, Sch-M, S, U, X, IS, IU, IX, SIU, SIX, UIX, BU, RangeS-S, RangeS-U, RangeI-N, RangeI-S, RangeI-U, RangeI-X, RangeX-S, RangeX-U, RangeX-X

    External resource owner:
    • External ExternalResource=<wait-type>

    Generic resource owner:
    • TransactionMutex TransactionInfo Workspace=<workspace-id>
    • Mutex
    • CLRTaskJoin
    • CLRMonitorEvent
    • CLRRWLockEvent
    • resourceWait

    Latch resource owner:
    • <db-id>:<file-id>:<page-in-file>
    • <GUID>
    • <latch-class> (<latch-address>)

    Further Information:
    Slava Oks's weblog: sys.dm_os_waiting_tasks
    Informit.com: Identifying Blocking Using sys.dm_os_waiting_tasks - Ken Henderson

    - Guy
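    For context, here is a minimal query (added by the editor, not part of the original post) showing where resource_description fits when you are chasing blocked tasks:

        -- List blocked tasks together with the resource they are waiting on.
        SELECT wt.session_id,
               wt.blocking_session_id,
               wt.wait_type,
               wt.wait_duration_ms,
               wt.resource_description
        FROM   sys.dm_os_waiting_tasks AS wt
        WHERE  wt.blocking_session_id IS NOT NULL
        ORDER  BY wt.wait_duration_ms DESC;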

    Read the article

  • m2eclipse resource filtering

    - by drewzilla
    I'm having problems with resource filtering using the m2eclipse Maven support in Eclipse. It seems that filtering only takes place on resources that have changed. This is fundamentally flawed because, if I have a file that references a property (e.g. ${my.property}), the filtering is only performed if the referencing file is also modified - if I only change the property value (in my pom.xml), the filtering is not applied to the files that reference it. So, if I make a change to a property in my pom file, the filtering is not applied. However, if I then go to the file that references that property (e.g. a Spring config file) and edit and save it, the filtering is applied. I did read somewhere that "m2eclipse skips filtering if there were no resource changes during incremental build". I'm using m2eclipse 0.10.x. Has anyone else come across this? Thanks, Andrew
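    For reference, resource filtering itself is switched on in the pom with the standard snippet below (shown only to frame the m2eclipse behaviour being described; the directory is the conventional default):

        <build>
          <resources>
            <resource>
              <directory>src/main/resources</directory>
              <!-- ${...} placeholders in these files are replaced at build time -->
              <filtering>true</filtering>
            </resource>
          </resources>
        </build>

    A full rebuild of the project in Eclipse (rather than an incremental build) generally forces the filtered copies to be regenerated even when only the pom property changed.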

    Read the article

  • Objective-C @class / import best practice

    - by Winder
    I've noticed that a lot of Objective-C examples will forward declare classes with @class, then actually import the class in the .m file with an import. I understand that this is considered a best practice, as explained in the answers to this question: http://stackoverflow.com/questions/322597/objective-c-class-vs-import Coming from C++ this feels backwards. I would normally include all needed .h files in the new class's header file. This seems useful since it would make the compiler generate a warning when two classes include each other, at which point I can decide whether this is a bad thing or not, and then use the same Objective-C style: forward declare the class in the header and include it in the .cpp file. What is the benefit of forward declaring with @class and importing in the implementation file? Should it be a best practice in C++ to forward declare classes rather than including the header file? Or is it wrong to think of Objective-C and C++ in these similar terms to begin with?
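    To make the convention being discussed concrete, here is a small hypothetical example (class names are invented):

        // Engine.h -- forward-declare the type; the full definition is not needed here.
        #import <Foundation/Foundation.h>

        @class Transmission;

        @interface Engine : NSObject
        @property (nonatomic, strong) Transmission *transmission;
        @end

        // Engine.m -- import the real header only where the members are actually used.
        #import "Engine.h"
        #import "Transmission.h"

        @implementation Engine
        // ...
        @end

    Because the header only needs to know that Transmission is a class (all object properties are pointers), the full header is pulled in once, in the implementation file, which keeps compile dependencies and rebuild times down.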

    Read the article

  • Does binarywriter.flush() also flush the underlying filestream object?

    - by jacob sebastian
    I have got a code snippet as follows (pseudocode):

        Dim fstream = New FileStream(some file here)
        Dim bwriter = New BinaryWriter(fstream)
        While Not end of file
            read from source file
            bwriter.Write()
            bwriter.Flush()
        End While

    The question I have is the following: when I call bwriter.Flush(), does it also flush the fstream object? Or do I have to explicitly call fstream.Flush(), as in the following example:

        While Not end of file
            read from source file
            bwriter.Write()
            bwriter.Flush()
            fstream.Flush()
        End While

    A few people suggested that I need to call fstream.Flush() explicitly to make sure that the data is written to the disk (or the device). However, my testing shows that the data is written to the disk as soon as I call the Flush() method on the bwriter object. Can someone confirm this?

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself you can do so here. Three new editions .NET Reflector 7 will come in three new editions: .NET Reflector .NET Reflector VS .NET Reflector VSPro The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition. Tabbed browsing .NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version. PowerCommands add-in We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model . All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people are finding useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list. .NET Reflector VS .NET Reflector VS adds a brand new Reflector object browser into Visual Studio to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector. 
When this is selected, the add-in decompiles the given assembly, Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information. You can therefore navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly. .NET Reflector VSPro .NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation. They are then decompiled. And then you can debug as if they were one of your own source code files. The future of .NET Reflector As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I re-iterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7 and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

  • Reading multiple files with PHPExcel

    - by KaOSoFt
    Hello there. I just recently started using this library (the one from CodePlex), but I ran into some issues. My goal is to process data from multiple Excel files and send that data to a database, per file. I'm doing something like:

        foreach( $file_list as $file ) {
            $book = PHPExcel_IOFactory::load( $path . $file );
        }

    Inside the foreach I'm (for now) just showing the data to the user, but after five files I get a memory error:

        Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 50688 bytes) in /var/www/test/classes/PHPExcel/Shared/OLERead.php on line 76

    Is there a way to __destruct the object after each file is loaded, so the memory is freed for the next file instead of accumulating, or do you know of a reason and a work-around for this? Please let me know any suggestions you have. Thanks in advance.
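    One commonly suggested direction, sketched here as an assumption rather than a verified fix: break the workbook's internal references after each file and enable cell caching so each iteration can actually release its memory (the cache settings assume a PHPExcel 1.7.x release):

        <?php
        // Spill cell data to php://temp instead of keeping every cell object in RAM.
        PHPExcel_Settings::setCacheStorageMethod(
            PHPExcel_CachedObjectStorageFactory::cache_to_phpTemp,
            array('memoryCacheSize' => '8MB')
        );

        foreach ($file_list as $file) {
            $book = PHPExcel_IOFactory::load($path . $file);

            // ... read the data and push it to the database here ...

            $book->disconnectWorksheets();   // break the circular workbook<->worksheet references
            unset($book);                    // now the garbage collector can reclaim the memory
        }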

    Read the article

  • Send files using Content-Disposition: attachment in parallel

    - by Manos Dilaverakis
    I have a PHP page that sends a file to the browser depending on the request data it receives: getfile.php?get=something sends file A, getfile.php?get=somethingelse sends file B, and so on. This is done like so:

        header('Content-Disposition: attachment; filename=' . urlencode($filename));
        readfile($fileURL);

    It works, except it can only send one file at a time. Any other files requested are sent in linear fashion: one starts as soon as another finishes. How can I get this to send files in parallel if the user requests another file while one is downloading?

    Read the article

  • How-to limit feature functionality

    - by ph0enix
    Are there any standard or "best practice" ways of limiting feature functionality for a particular application? Example: We have a product with a variety of features, our customers can pick and choose which features they would like to use, and the cost of the product varies based on which features they are actually using. In the past, we have distributed along with our software installer an encrypted license file that contains information about the customer, as well as the collection of features that they have enabled. In code, we read from the license file and enable the functionality according to the license file. This seems to work fine, except there are a few disadvantages:

    • Upgrading users with new functionality can be sort of a pain.
    • If a particular feature shows up in multiple places throughout the application, a developer might not realize that this feature should be licensed, and forget to check the license file before granting functionality to the user.
    • If the license file becomes corrupted, deleted, moved, renamed, etc., the application will not run.

    We're getting ready to roll out a new set of features, and I'm curious: what have others in the community done to tackle this problem?

    Read the article

  • Get directory contents in date modified order

    - by nevan
    Is there a method to get the contents of a folder in a particular order? I'd like an array of file attribute dictionaries (or just file names) ordered by date modified. Right now, I'm doing it this way:

    • get an array with the file names
    • get the attributes of each file
    • store the file's path and modified date in a dictionary with the date as a key

    Next I have to output the dictionary in date order, but I wondered if there's an easier way? If not, is there a code snippet somewhere which will do this for me? Thanks.
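    A minimal Cocoa sketch of that approach, written by the editor as an illustration (folder is assumed to hold the directory path):

        NSFileManager *fm = [NSFileManager defaultManager];
        NSMutableArray *entries = [NSMutableArray array];

        for (NSString *name in [fm contentsOfDirectoryAtPath:folder error:NULL]) {
            NSString *path = [folder stringByAppendingPathComponent:name];
            NSDictionary *attrs = [fm attributesOfItemAtPath:path error:NULL];
            [entries addObject:[NSDictionary dictionaryWithObjectsAndKeys:
                                   path, @"path",
                                   [attrs objectForKey:NSFileModificationDate], @"modified", nil]];
        }

        // Sort the dictionaries by their modification date, oldest first.
        NSArray *byDate = [entries sortedArrayUsingDescriptors:
            [NSArray arrayWithObject:
                [NSSortDescriptor sortDescriptorWithKey:@"modified" ascending:YES]]];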

    Read the article

  • very quickly getting total size of folder

    - by freakazo
    I want to quickly find the total size of any folder using Python.

        def GetFolderSize(path):
            TotalSize = 0
            for item in os.walk(path):
                for file in item[2]:
                    try:
                        TotalSize = TotalSize + getsize(join(item[0], file))
                    except:
                        print("error with file: " + join(item[0], file))
            return TotalSize

    That's the simple script I wrote to get the total size of the folder; it took around 60 seconds (+/- 5 seconds). By using multiprocessing I got it down to 23 seconds on a quad-core machine. Using the Windows file explorer it takes only ~3 seconds (right-click > Properties to see for yourself). So is there a faster way of finding the total size of a folder, close to the speed that Windows can do it? Windows 7, Python 2.6. (I did search, but most of the time people used a very similar method to my own.) Thanks in advance.
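    As a point of comparison, here is a sketch using os.scandir, which reuses the directory entry's stat information instead of issuing a separate getsize call per file. Note this assumes Python 3.5+ (or the scandir backport), not the Python 2.6 mentioned in the question:

        import os

        def get_folder_size(path):
            total = 0
            for entry in os.scandir(path):
                try:
                    if entry.is_file(follow_symlinks=False):
                        total += entry.stat(follow_symlinks=False).st_size
                    elif entry.is_dir(follow_symlinks=False):
                        total += get_folder_size(entry.path)   # recurse into subfolders
                except OSError as exc:
                    print("error with entry: %s (%s)" % (entry.path, exc))
            return total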

    Read the article

  • mod_rewrite apache

    - by Peter
    Is there any way to hide the redirected URL? Here is what I think:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^(.*)$ http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI}&force

    So the long redirected URL http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI} would become something shorter like /mintedomain.com/track/. Is it possible? Adrian

    Edit (replying to Andrew): This is the stats software Mint (haveamint.com) with the File Download tracker plugin. The File Download tracker works in this way: in .htaccess, every file (zip, rar, txt, ...) is redirected to the tracker.php file (for the stats):

        http://mydomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI}

    So the redirected URL looks like this for a zip file:

        http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://mydomain/downloads/apple.zip

    This redirected URL is very long and ugly. The best for me would be to map it to a shorter URL, for example http://mydomain.com/track/downloads/apple.zip, so that http://mydomain.com/track would stand for http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php

    Read the article

  • EXcel VBA : Excel Macro to create table in a PowerPoint

    - by Balaji.N.S
    Hi friends, my requirement is: I have an Excel workbook which contains some data. I would like to select some data from the workbook, open a PowerPoint file, create a table in PowerPoint and populate the data into it. Right now I have succeeded in collecting the data from Excel and opening a PowerPoint file through Excel VBA code. Code for opening the PowerPoint from Excel:

        Set objPPT = CreateObject("Powerpoint.Application")
        objPPT.Visible = True

        Dim file As String
        file = "C:\Heavyhitters_new.ppt"

        Set pptApp = CreateObject("PowerPoint.Application")
        Set pptPres = pptApp.Presentations.Open(file)

    Now how do I create the table in PowerPoint from Excel and populate the data? Timely help will be very much appreciated. Thanks in advance,
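    A rough late-bound VBA sketch of one way to continue from there (the slide position, table size, sheet name and ranges are invented for illustration, and 12 is hard-coded for ppLayoutBlank because the code uses late binding):

        Dim pptSlide As Object, pptTable As Object
        Dim r As Long, c As Long

        ' Add a blank slide at the end of the presentation (12 = ppLayoutBlank).
        Set pptSlide = pptPres.Slides.Add(pptPres.Slides.Count + 1, 12)

        ' Create a 4-row x 3-column table and copy values from Sheet1!A1:C4.
        Set pptTable = pptSlide.Shapes.AddTable(4, 3, 50, 50, 600, 200).Table

        For r = 1 To 4
            For c = 1 To 3
                pptTable.Cell(r, c).Shape.TextFrame.TextRange.Text = _
                    CStr(ThisWorkbook.Worksheets("Sheet1").Cells(r, c).Value)
            Next c
        Next r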

    Read the article

  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
    Introduction There are many ways to generate a thread dump of a WebLogic Managed Server. For example, take a look at: Taking Thread Dumps - [an excellent blog post on the Middleware Magic site]or  Different ways to take thread dumps in WebLogic Server (Document 1098691.1) There is another method - use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is RDA will also collect configuration, log files, network, system, performance information at the same time. Instructions 1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog "Resolve SRs Faster Using RDA - Find the Right Profile" 2. Choose a profile, which includes the WebLogic Server data collection modules (for example the profile "WebLogicServer"). At RDA setup time you should see the prompt below: ------------------------------------------------------------------------------- S301WLS: Collects Oracle WebLogic Server Information ------------------------------------------------------------------------------- Enter the location of the directory where the domains to analyze are located (For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains) Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains) > For a successful WLS connection, ensure that the domain Admin Server is up and running. Data Collection Type:   1  Collect for a single server (offline mode)   2  Collect for a single server (using WLS connection)   3  Collect for multiple servers (using WLS connection) Enter the item number Hit 'Return' to accept the default (1) > 2 Choose option 2 or 3. Note: Collect for a single server or multiple servers using WLS connection means that RDA will attempt to connect to execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function - "threadDumps()". If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If in the final output you see no Thread Dump menu item, then it's likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps. 3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters which govern the number of thread dumps and thread dump interval. #N.Number of thread dumps to capture WREQ_THREAD_DUMP=10 #N.Thread dump interval WREQ_THREAD_DUMP_INTERVAL=5000 The example lines above show the default settings. In other words, RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like: #N.Number of thread dumps to capture WREQ_THREAD_DUMP=10 #N.Thread dump interval WREQ_THREAD_DUMP_INTERVAL=30000 However, bear in mind, that such change will increase the total amount of time it takes for RDA to complete its run. 4. Once you are happy with the setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory. 5. For ease of viewing, open up the RDA Start html file - "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s). 
    See the screenshots below:

    Screenshot 1: RDA Start Page - Main Index
    Screenshot 2: Managed Server Sub Index
    Screenshot 3: WLST Collections
    Screenshot 4: Thread Dump Page - List of dump file links
    Screenshot 5: Thread Dump Dat File Link

    Additional Comments:

    A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis via thread dump analysis tools such as:

    Thread Dump Analyzer
    Samurai - a GUI-based tail / thread dump analysis tool

    If you are new to thread dump analysis, take a look at this recorded Support Advisor Webcast: Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps [slide deck from the webcast in PDF format]

    B) I have logged a couple of enhancement requests for the RDA Development Team to consider:

    Add a timestamp to the dump file links, the dat filename, and the top of the body of the dat file.
    Package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.

    Read the article

  • How does opendir work in Perl 6?

    - by sid_com
    Hello! Can someone tell me why the "opendir" doesn't work?

        #!/usr/bin/env perl6
        use v6;

        my $file = 'Dokumente/test_file';
        if ( my $fh = open $file, :r ) {
            for $fh.lines -> $line {
                say $line;
            }
        }
        else {
            say "Could not open '$file'";
        }

        my $dir = 'Dokumente';
        my $dh = opendir $dir err die "Could not open $dir: $!";

    Output:

        Hello, World!
        Line 2.
        Last line.
        Could not find non-existent sub &opendir
        current instr.: '_block14' pc 29 (EVAL_1:0)
        called from Sub '!UNIT_START' pc 1163 (src/glue/run.pir:20)
        called from Sub 'perl6;PCT;HLLCompiler;eval' pc -1 ((unknown file):-1)
        called from Sub 'perl6;PCT;HLLCompiler;evalfiles' pc 1303 (compilers/pct/src/PCT/HLLCompiler.pir:707)
        called from Sub 'perl6;PCT;HLLCompiler;command_line' pc 1489 (compilers/pct/src/PCT/HLLCompiler.pir:794)
        called from Sub 'perl6;Perl6;Compiler;main' pc -1 ((unknown file):-1)
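    For context (an editor's note, not part of the question): the Rakudo error says there is no &opendir built-in at all. Assuming a Rakudo build that provides the dir routine, listing a directory in Perl 6 looks roughly like this, with no opendir/readdir/closedir dance:

        use v6;

        my $dir = 'Dokumente';
        # dir() returns the directory entries directly.
        for dir($dir) -> $entry {
            say $entry;
        }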

    Read the article

  • importing an existing x509 certificate and private key in Java keystore to use in ActiveMQ ssl context

    - by Aleksandar Ivanisevic
    I have this in the ActiveMQ config:

        <sslContext>
            <sslContext keyStore="file:/home/alex/work/amq/broker.ks" keyStorePassword="password"
                        trustStore="file:${activemq.base}/conf/broker.ts" trustStorePassword="password"/>
        </sslContext>

    I have a pair of files: an x509 cert and a key file. How do I import those two to be used in the ssl and ssl+stomp connectors? All examples I could google always generate the key themselves, but I already have a key. I have tried

        keytool -import -keystore ./broker.ks -file mycert.crt

    but this only imports the certificate and not the key file, and results in

        2009-05-25 13:16:24,270 [localhost:61612] ERROR TransportConnector - Could not accept connection : No available certificate or key corresponds to the SSL cipher suites which are enabled.

    I have tried concatenating the cert and the key, but got the same result. How do I import the key?
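    The usual route, sketched here by the editor with invented file names and an invented alias (verify the options against your OpenSSL and JDK versions), is to bundle the certificate and key into a PKCS#12 container first and then import that into the JKS keystore ActiveMQ points at:

        # 1. Wrap the existing certificate and private key in a PKCS#12 container.
        openssl pkcs12 -export -in mycert.crt -inkey mykey.pem -name broker -out broker.p12

        # 2. Import the PKCS#12 container into the JKS keystore
        #    (keytool -importkeystore requires Java 6 or later; it will prompt for the passwords).
        keytool -importkeystore -srckeystore broker.p12 -srcstoretype PKCS12 \
                -destkeystore broker.ks -deststoretype JKS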

    Read the article

  • Generating a reasonable ctags database for Boost

    - by Robert S. Barnes
    I'm running Ubuntu 8.04 and I ran the command:

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/stdlibcpp /usr/include/c++/4.2.4/

    to generate a ctags database for the standard C++ library and STL (libstdc++) on my system, for use with the OmniCppComplete vim script. This gave me a very reasonable 4 MB tags file which seems to work fairly well. However, when I ran the same command against the installed Boost headers:

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/boost /usr/include/boost/

    I ended up with a 1.4 GB tags file! I haven't tried it yet, but that seems like it's going to be too large to be useful. Is there a way to get a slimmer, more usable tags file for my installed Boost headers?

    Edit: Just as a note, libstdc++ includes TR1, which has a lot of Boost libs in it. So there must be something weird going on for libstdc++ to come out with a 4 MB tags file and Boost to end up with a 1.4 GB tags file.
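    One knob worth knowing about (an editor's suggestion, not from the question): Exuberant Ctags can skip directories by name with --exclude, so the heavily machine-generated parts of Boost can be left out. Which directories are safe to drop depends on what you actually want completion for; the names below are only examples:

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q \
                --exclude=preprocessor --exclude=detail --exclude=mpl \
                -f ~/.vim/tags/boost /usr/include/boost/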

    Read the article

  • Python 2.5.2: trying to open files recursively

    - by user248959
    Hi, the script below should open all the files inside the folder 'pruebaba' recursively, but I get this error:

        Traceback (most recent call last):
          File "/home/tirengarfio/Desktop/prueba.py", line 8, in <module>
            f = open(file,'r')
        IOError: [Errno 21] Is a directory

    This is the hierarchy:

        pruebaba
            folder1
                folder11
                    test1.php
                folder12
                    test1.php
                    test2.php
            folder2
                test1.php

    The script:

        import re, fileinput, os

        path = "/home/tirengarfio/Desktop/pruebaba"
        os.chdir(path)

        for file in os.listdir("."):
            f = open(file,'r')
            data = f.read()
            data = re.sub(r'(\s*function\s+.*\s*{\s*)',
                          r'\1echo "The function starts here."',
                          data)
            f.close()
            f = open(file, 'w')
            f.write(data)
            f.close()

    Any idea? Regards, Javi
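    The error comes from os.listdir returning directory names (folder1, folder2) as well as files. A possible rework using os.walk, which recurses and yields only file names, might look like the sketch below (kept free of the with-statement so it also runs on the Python 2.5.2 mentioned in the title):

        import os
        import re

        path = "/home/tirengarfio/Desktop/pruebaba"
        pattern = re.compile(r'(\s*function\s+.*\s*{\s*)')

        for dirpath, dirnames, filenames in os.walk(path):
            for name in filenames:
                full = os.path.join(dirpath, name)   # full path, so no os.chdir is needed
                f = open(full, 'r')
                data = f.read()
                f.close()
                data = pattern.sub(r'\1echo "The function starts here."', data)
                f = open(full, 'w')
                f.write(data)
                f.close()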

    Read the article

  • Problem with Variable Scoping in Rebol's Object

    - by Rebol Tutorial
    I have modified the rebodex app so that it can be called from rebol's console any time by typing rebodex. To show the title of the app, I need to store it in app-title: system/script/header/title so tha it could be used later in view/new/title dex reform [self/app-title version] That works but as you can see I have named the var name "app-title", but if I use "title" instead, the window caption would show weird stuff (vid code). Why ? REBOL [ Title: "Rebodex" Date: 23-May-2010 Version: 2.1.1 File: %rebodex.r Author: "Carl Sassenrath" Modification: "Rebtut" Purpose: "A simple but useful address book contact database." Email: %carl--rebol--com library: [ level: 'intermediate platform: none type: 'tool domain: [file-handling DB GUI] tested-under: none support: none license: none see-also: none ] ] rebodex.context: context [ app-title: system/script/header/title version: system/script/header/version set 'rebodex func[][ names-path: %names.r ;data file name-list: none fields: [name company title work cell home car fax web email smail notes updat] names: either exists? names-path [load names-path][ [[name "Carl Sassenrath" title "Founder" company "REBOL Technologies" email "%carl--rebol--com" web "http://www.rebol.com"]] ] brws: [ if not empty? web/text [ if not find web/text "http://" [insert web/text "http://"] error? try [browse web/text] ] ] dial: [request [rejoin ["Dial number for " name/text "? (Not implemented.)"] "Dial" "Cancel"]] dex-styles: stylize [ lab: label 60x20 right bold middle font-size 11 btn: button 64x20 font-size 11 edge [size: 1x1] fld: field 200x20 font-size 11 middle edge [size: 1x1] inf: info font-size 11 middle edge [size: 1x1] ari: field wrap font-size 11 edge [size: 1x1] with [flags: [field tabbed]] ] dex-pane1: layout/offset [ origin 0 space 2x0 across styles dex-styles lab "Name" name: fld bold return lab "Title" title: fld return lab "Company" company: fld return lab "Email" email: fld return lab "Web" brws web: fld return lab "Address" smail: ari 200x72 return lab "Updated" updat: inf 200x20 return ] 0x0 updat/flags: none dex-pane2: layout/offset [ origin 0 space 2x0 across styles dex-styles lab "Work #" dial work: fld 140 return lab "Home #" dial home: fld 140 return lab "Cell #" dial cell: fld 140 return lab "Alt #" dial car: fld 140 return lab "Fax #" fax: fld 140 return lab "Notes" notes: ari 140x72 return pad 136x1 btn "Close" #"^q" [store-entry save-file unview] ] 0x0 dex: layout [ origin 8x8 space 0x1 styles dex-styles srch: fld 196x20 bold across rslt: list 180x150 [ nt: txt 178x15 middle font-size 11 [ store-entry curr: cnt find-name nt/text update-entry unfocus show dex ] ] supply [ cnt: count + scroll-off face/text: "" face/color: snow if not n: pick name-list cnt [exit] face/text: select n 'name face/font/color: black if curr = cnt [face/color: system/view/vid/vid-colors/field-select] ] sl: slider 16x150 [scroll-list] return return btn "New" #"^n" [new-name] btn "Del" #"^d" [delete-name unfocus update-entry search-all show dex] btn "Sort" [sort names sort name-list show rslt] return at srch/offset + (srch/size * 1x0) bx1: box dex-pane1/size bx2: box dex-pane2/size return ] bx1/pane: dex-pane1/pane bx2/pane: dex-pane2/pane rslt/data: [] this-name: first names name-list: copy names curr: none search-text: "" scroll-off: 0 srch/feel: make srch/feel [ redraw: func [face act pos][ face/color: pick face/colors face system/view/focal-face if all [face = system/view/focal-face face/text search-text] [ search-text: copy face/text search-all if 1 = length? 
name-list [this-name: first name-list update-entry show dex] ] ] ] update-file: func [data] [ set [path file] split-path names-path if not exists? path [make-dir/deep path] write names-path data ] save-file: has [buf] [ buf: reform [{REBOL [Title: "Name Database" Date:} now "]^/[^/"] foreach n names [repend buf [mold n newline]] update-file append buf "]" ] delete-name: does [ remove find/only names this-name if empty? names [append-empty] save-file new-name ] clean-names: function [][n][ forall names [ if any [empty? first names none? n: select first names 'name empty? n][ remove names ] ] names: head names ] search-all: function [] [ent flds] [ clean-names clear name-list flds: [name] either empty? search-text [insert name-list names][ foreach nam names [ foreach word flds [ if all [ent: select nam word find ent search-text][ append/only name-list nam break ] ] ] ] scroll-off: 0 sl/data: 0 resize-drag scroll-list curr: none show [rslt sl] ] new-name: does [ store-entry clear-entry search-all append-empty focus name ; update-entry ] append-empty: does [append/only names this-name: copy []] find-name: function [str][] [ foreach nam names [ if str = select nam 'name [ this-name: nam break ] ] ] store-entry: has [val ent flag] [ flag: 0 if not empty? trim name/text [ foreach word fields [ val: trim get in get word 'text either ent: select this-name word [ if ent val [insert clear ent val flag: flag + 1] ][ if not empty? val [repend this-name [word copy val] flag: flag + 1] ] if flag = 1 [flag: 2 updat/text: form now] ] if not zero? flag [save-file] ] ] update-entry: does [ foreach word fields [ insert clear get in get word 'text any [select this-name word ""] ] show rslt ] clear-entry: does [ clear-fields bx1 clear-fields bx2 updat/text: form now unfocus show dex ] show-names: does [ clear rslt/data foreach n name-list [ if n/name [append rslt/data n/name] ] show rslt ] scroll-list: does [ scroll-off: max 0 to-integer 1 + (length? name-list) - (100 / 16) * sl/data show rslt ] do resize-drag: does [sl/redrag 100 / max 1 (16 * length? name-list)] center-face dex new-name focus srch show-names view/new/title dex reform [app-title version] insert-event-func [ either all [event/type = 'close event/face = dex][ store-entry unview ][event] ] do-events ] ]

    Read the article

  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated from an XML file. Once running, the application will receive a data stream with which I hope to update both the file and the datagrid. So I'm curious what you would suggest for the workflow: split the data from the data stream and simultaneously populate the file and the grid, or populate the XML file first and set up a timer to have the grid read the file? I'm really looking for optimal performance.

    Read the article
