Search Results

Search found 20360 results on 815 pages for 'capture output'.

Page 155/815

  • why is LZMA SDK (7-zip) so slow

    - by Tono Nam
    I find 7-Zip great and would like to use it in .NET applications. I have a 10MB file (a.001) and it takes 2 seconds to encode. Now it would be nice if I could do the same thing in C#. I have downloaded the LZMA SDK C# source code from http://www.7-zip.org/sdk.html. I basically copied the CS directory into a console application in Visual Studio, then I compiled and everything compiled smoothly. In the output directory I placed the file a.001, which is 10MB in size. In the Main method that came with the source code I placed: [STAThread] static int Main(string[] args) { // e stands for encode args = "e a.001 output.7z".Split(' '); // added this line for debug try { return Main2(args); } catch (Exception e) { Console.WriteLine("{0} Caught exception #1.", e); // throw e; return 1; } } When I execute the console application it works great and I get the output a.7z in the working directory. The problem is that it takes so long: about 15 seconds to execute! I have also tried the approach from http://stackoverflow.com/a/8775927/637142 and it also takes very long. Why is it 10 times slower than the actual program? Even if I set 7-Zip to use only one thread, it still takes much less time (3 seconds vs. 15). (Edit) Another possibility: could it be because C# is slower than assembly or C? I notice that the algorithm does a lot of heavy operations. For example, compare these two blocks of code; they both do the same thing. C: void main() { time_t now; int i,j,k,x; long counter ; counter = 0; now = time(NULL); /* LOOP */ for(x=0; x<10; x++) { counter = -1234567890 + x+2; for (j = 0; j < 10000; j++) for(i = 0; i< 1000; i++) for(k =0; k<1000; k++) { if(counter > 10000) counter = counter - 9999; else counter= counter +1; } printf (" %d \n", time(NULL) - now); // display elapsed time } printf("counter = %d\n\n",counter); // display result of counter printf ("Elapsed time = %d seconds ", time(NULL) - now); gets("Wait"); } C#: static void Main(string[] args) { DateTime now; int i, j, k, x; long counter; counter = 0; now = DateTime.Now; /* LOOP */ for (x = 0; x < 10; x++) { counter = -1234567890 + x + 2; for (j = 0; j < 10000; j++) for (i = 0; i < 1000; i++) for (k = 0; k < 1000; k++) { if (counter > 10000) counter = counter - 9999; else counter = counter + 1; } Console.WriteLine((DateTime.Now - now).Seconds.ToString()); } Console.Write("counter = {0} \n", counter.ToString()); Console.Write("Elapsed time = {0} seconds", DateTime.Now - now); Console.Read(); } Note how much slower C# was; both programs were run outside Visual Studio in release mode. Maybe that is the reason why it takes so much longer in .NET than in C++. Conclusion: I cannot figure out what is causing the problem. I guess I will use 7z.dll and invoke the necessary methods from C#. A library that does that is at http://sevenzipsharp.codeplex.com/, and that way I am using the same library that 7-Zip itself uses: // don't forget to add a reference to SevenZipSharp from the link I provided static void Main(string[] args) { // load the dll SevenZip.SevenZipCompressor.SetLibraryPath(@"C:\Program Files (x86)\7-Zip\7z.dll"); SevenZip.SevenZipCompressor compress = new SevenZip.SevenZipCompressor(); compress.CompressDirectory("MyFolderToArchive", "output.7z"); }
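    Not part of the question, but one variable worth ruling out before blaming the language is the measurement itself: the first managed run pays JIT and file-cache costs. Below is a minimal, hypothetical timing harness in C# (the Main2 stub here only stands in for the LzmaAlone sample entry point from the SDK; file names are the ones from the question) that does an untimed warm-up pass and then measures with Stopwatch:

        using System;
        using System.Diagnostics;

        static class EncodeTimer
        {
            // Times an encode operation with one untimed warm-up pass so JIT compilation
            // and first-touch file I/O are excluded from the measurement.
            static long TimeMs(Action encode)
            {
                encode();                          // warm-up (not timed)
                var sw = Stopwatch.StartNew();
                encode();                          // measured run
                sw.Stop();
                return sw.ElapsedMilliseconds;
            }

            static void Main()
            {
                // Same arguments as in the question: encode a.001 to output.7z
                string[] args = "e a.001 output.7z".Split(' ');
                Console.WriteLine("Managed LZMA encode took {0} ms", TimeMs(() => Main2(args)));
            }

            // Stand-in for the SDK sample's Main2 entry point; not reproduced here.
            static int Main2(string[] args) { return 0; }
        }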

    Read the article

  • Visual Studio 2010 Beta start without debugging Console won't stay open

    - by CousinVinny
    I want to keep the console (output) window open to view the output of the project I am working on. I have enabled the Start Without Debugging option and added the button to the debugging toolbar, but alas it fails to keep the window open nicely like Visual Studio 2008 did. I have to add cin.getline() and the like to get it to stay open, but I don't want to type that. Any suggestions on how to alter Visual Studio 2010 to keep it open, or any debugging tricks that make it easier to view output for longer than a flash? Visual Studio used to have a "Press any key to continue" prompt; I want it back...

    Read the article

  • WCF CreateMessage from custom body xml

    - by Cecil
    I have the following code: string body = "<custom xml>"; XDocument doc = XDocument.Parse(body); MemoryStream stream = new MemoryStream(); XmlWriter writer = XmlWriter.Create(stream); if (writer != null) { doc.Save(writer); writer.Flush(); writer.Close(); } stream.Position = 0; XmlReader rd = XmlReader.Create(stream); Message output = Message.CreateMessage(msg.Version, msg.Headers.Action, rd); output.Headers.CopyHeadersFrom(msg); output.Properties.CopyProperties(msg.Properties); When I try to use the message I get the following error: hexadecimal value 0x02, is an invalid character. Line 1, position 2. Any idea why? And what can I do to fix this?
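    One simplification worth trying (a sketch, not an answer from the thread; it assumes msg is the incoming System.ServiceModel.Channels.Message): let the XDocument hand WCF a reader directly via CreateReader(), which takes the MemoryStream/XmlWriter round trip and its positioning out of the picture.

        using System.ServiceModel.Channels;
        using System.Xml.Linq;

        static Message ReplaceBody(Message msg, string bodyXml)
        {
            XDocument doc = XDocument.Parse(bodyXml);

            // XDocument.CreateReader() yields an XmlReader positioned over the parsed
            // content, so no intermediate stream or writer settings are involved.
            Message output = Message.CreateMessage(msg.Version, msg.Headers.Action, doc.CreateReader());
            output.Headers.CopyHeadersFrom(msg);
            output.Properties.CopyProperties(msg.Properties);
            return output;
        }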

    Read the article

  • How do I use local memory in OpenCL?

    - by splicer
    I've been playing with OpenCL recently, and I'm able to write simple kernels that use only global memory. Now I'd like to start using local memory, but I can't seem to figure out how to use get_local_size() and get_local_id() to compute one "chunk" of output at a time. For example, let's say I wanted to convert Apple's OpenCL Hello World example kernel to something that uses local memory. How would you do it? Here's the original kernel source: __kernel void square( __global float *input, __global float *output, const unsigned int count) { int i = get_global_id(0); if (i < count) output[i] = input[i] * input[i]; } If this example can't easily be converted into something that shows how to make use of local memory, any other simple example will do. Thanks!

    Read the article

  • Associate mount points with local disks in VBscript / WMI

    - by Mark Unwin
    I have a VBScript that outputs various config items about a system, both hardware and software. I can output disks and their associated partitions. I can output mount points. I do not seem to be able to associate a mount point with a local disk (where it actually is a local disk). I need to be able to do this using VBScript, so as to fit in with the rest of the ~2000 lines of code. I do NOT want to run some other program graphically. I know the disk manager service can show me (My Computer - Manage - Disk Management), but this is not what I need. I need to be able to do this remotely via VBScript. I am open to running an .exe from the VBScript and piping the output back into VBScript and massaging it from there. Any ideas? Thanks in advance.
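    For reference, WMI does expose this association: the Win32_MountPoint class links a Win32_Directory (the mount path) to a Win32_Volume, on Windows Server 2003 and later (not on XP clients). The sketch below shows the query shape in C#; the class names and WQL translate directly to VBScript via GetObject("winmgmts:..."). The scope path and output format here are illustrative assumptions, not something from the question.

        using System;
        using System.Management;   // add a reference to System.Management.dll

        class MountPointDump
        {
            static void Main()
            {
                // "." means the local machine; substitute a host name to query remotely.
                var scope = new ManagementScope(@"\\.\root\cimv2");
                var query = new ObjectQuery("SELECT * FROM Win32_MountPoint");

                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject mp in searcher.Get())
                    {
                        // Directory and Volume are object paths, e.g.
                        // Win32_Directory.Name="C:\\Mount\\Data" and Win32_Volume.DeviceID="\\\\?\\Volume{...}\\"
                        Console.WriteLine("{0}  ->  {1}", mp["Directory"], mp["Volume"]);
                    }
                }
            }
        }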

    Read the article

  • Forcing a mixed ISO-8859-1 and UTF-8 multi-line string into UTF-8

    - by knorv
    Consider the following problem: A multi-line string $junk contains some lines which are encoded in UTF-8 and some in ISO-8859-1. I don't know a priori which lines are in which encoding, so heuristics will be needed. I want to turn $junk into pure UTF-8 with proper re-encoding of the ISO-8859-1 lines. Also, in the event of errors in the processing I want to provide a "best effort result" rather than throwing an error. My current attempt looks like this: $junk = &force_utf8($junk); sub force_utf8 { my $input = shift; my $output = ''; foreach my $line (split(/\n/, $input)) { if (utf8::valid($line)) { utf8::decode($line); } $output .= "$line\n"; } return $output; } While this appears to work I'm certain this is not the optimal solution. How would you improve my force_utf8(...) sub?

    Read the article

  • Filter a model's attributes before outputting as json

    - by Jorge Israel Peña
    Hey guys, I need to output my model as json and everything is going fine. However, some of the attributes need to be 'beautified' by filtering them through some helper methods, such as number_to_human_size. How would I go about doing this? In other words, say that I have an attribute named bytes and I want to pass it through number_to_human_size and have that result be output to json. I would also like to 'trim' what gets output as json if that's possible, since I only need some of the attributes. Is this possible? Can someone please give me an example? I would really appreciate it.

    Read the article

  • Problem showing modelstate errors while using RenderPartialToString

    - by Martin
    I'm using the following code: public string RenderPartialToString(ControllerContext context, string partialViewName, ViewDataDictionary viewData, TempDataDictionary tempData) { ViewEngineResult result = ViewEngines.Engines.FindPartialView(context, partialViewName); if (result.View != null) { StringBuilder sb = new StringBuilder(); using (StringWriter sw = new StringWriter(sb)) { using (HtmlTextWriter output = new HtmlTextWriter(sw)) { ViewContext viewContext = new ViewContext(context, result.View, viewData, tempData, output); result.View.Render(viewContext, output); } } return sb.ToString(); } return String.Empty; } to return a partial view and a form through JSON. It works as it should, but as soon as I get ModelState errors my ValidationSummary does not show. The JSON only returns the default form, but it does not highlight the validation errors or show the validation summary. Am I missing something? This is how I call RenderPartialToString: string partialView = RenderPartialToString(this.ControllerContext, "~/Areas/User/Views/Account/ChangeAccountDetails.ascx", new ViewDataDictionary(avd), new TempDataDictionary());
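    A possible cause worth checking (an assumption, not something confirmed in the question): the call above builds a brand-new ViewDataDictionary, and a fresh dictionary carries an empty ModelState, so the partial rendered by RenderPartialToString never sees the controller's validation errors. A minimal sketch of passing the controller's own ViewData instead, assuming this runs inside the controller action and avd is the partial's model (the action and view model names are hypothetical):

        // Inside the controller action, after model binding/validation has populated ModelState:
        public ActionResult ChangeAccountDetails(AccountDetailsViewModel avd)
        {
            ViewData.Model = avd;   // the partial's model
            string partialView = RenderPartialToString(
                ControllerContext,
                "~/Areas/User/Views/Account/ChangeAccountDetails.ascx",
                ViewData,           // carries this request's ModelState, unlike new ViewDataDictionary(avd)
                TempData);
            return Json(new { html = partialView });
        }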

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and most recently for Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB, and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins) etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database. It’s built on top of the same concepts like Tables, Rows, Columns, Primary keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured, let’s understand the key components that create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geographies, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of the box features provided by some databases, e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and dumps them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD).
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key. As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand / use, easy to query and easy to slice / dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure of the fact being at the center surrounded by dimensions is called a Star schema. A similar structure, with the difference of the dimension tables being normalized, is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. a Product may be broken into categories and categories in turn into individual items. Category and Items are often referred to as Levels and their constituents as Members, with their overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don’t worry, it will sink in as you start working with Cubes and others. Let’s see a few other terms that we would run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There would be a few users in your organization that want to report on the most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interested in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for a few months, and the ODS would sync with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc.
Hence they are often scaled down to something called a Data mart that supports a specific segment of the business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse. The data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
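    To make the fact/dimension idea concrete, here is a small hypothetical C# sketch (the Sale fact and Product dimension shapes and values are invented, and in-memory LINQ stands in for SQL) showing a measure being rolled up along one level of a dimension hierarchy, the kind of aggregate a cube would precompute:

        using System;
        using System.Linq;

        // Hypothetical star-schema shapes: a fact row holds the measure plus foreign keys,
        // a dimension row holds the descriptive context.
        record SaleFact(int ProductKey, int TimeKey, decimal SalesAmount);
        record ProductDim(int ProductKey, string Category, string Item);

        class StarSchemaDemo
        {
            static void Main()
            {
                var products = new[]
                {
                    new ProductDim(1, "Bikes", "Road-150"),
                    new ProductDim(2, "Bikes", "Touring-200"),
                    new ProductDim(3, "Helmets", "Sport-100")
                };
                var facts = new[]
                {
                    new SaleFact(1, 20120101, 3500m),
                    new SaleFact(2, 20120101, 2100m),
                    new SaleFact(3, 20120102, 45m)
                };

                // Join facts to the dimension and roll the measure up to the Category level.
                var salesByCategory =
                    from f in facts
                    join d in products on f.ProductKey equals d.ProductKey
                    group f by d.Category into g
                    select new { Category = g.Key, Total = g.Sum(x => x.SalesAmount) };

                foreach (var row in salesByCategory)
                    Console.WriteLine("{0}: {1}", row.Category, row.Total);
            }
        }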

    Read the article

  • Relative Path To Absolute Path in VB .NET

    - by Mehdi Anis
    Hi, I am writing a VB .NET console app that takes a relative path and spits out all file names, or an error for invalid input. I am having trouble getting the physical path from the relative path. Example: I am in folder C:\Documents and Settings\MehdiAnis.ULTIMATEBANGLA\My Documents\Visual Studio 2005\Projects\SP_Sol\SP_Proj\bin\Debug and my app SP.exe is also in the same folder. If I run "SP.exe ..\", the output will be a list of all files in the folder "C:\Documents and Settings\MehdiAnis.ULTIMATEBANGLA\My Documents\Visual Studio 2005\Projects\SP_Sol\SP_Proj\bin". If I run "SP.exe ..\..\", the output will be a list of all files in the folder "C:\Documents and Settings\MehdiAnis.ULTIMATEBANGLA\My Documents\Visual Studio 2005\Projects\SP_Sol\SP_Proj". If I run "SP.exe ..\..\..\", the output will be a list of all files in the folder "C:\Documents and Settings\MehdiAnis.ULTIMATEBANGLA\My Documents\Visual Studio 2005\Projects\SP_Sol". Currently I am handling 1 relative path, but not more than one: If Source.IndexOf("..\") = 0 Then Dim Sibling As String = Directory.GetParent(Directory.GetCurrentDirectory()).ToString() Source = Source.Replace("..\", Sibling) End If How can I easily handle multiple ..\? Thanks.
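    Not from the thread, but the framework can do this collapsing for you: Path.Combine joins the current directory with the relative path, and Path.GetFullPath normalizes any number of "..\" segments. A minimal C# sketch (it translates line-for-line to VB.NET; the default argument is made up):

        using System;
        using System.IO;

        class ListFiles
        {
            static void Main(string[] args)
            {
                string source = args.Length > 0 ? args[0] : @"..\..\";   // e.g. SP.exe ..\..\

                // GetFullPath collapses every ..\ (and .\) segment against the current directory.
                string physicalPath = Path.GetFullPath(Path.Combine(Directory.GetCurrentDirectory(), source));

                if (!Directory.Exists(physicalPath))
                {
                    Console.WriteLine("Invalid input: {0}", physicalPath);
                    return;
                }

                foreach (string file in Directory.GetFiles(physicalPath))
                    Console.WriteLine(Path.GetFileName(file));
            }
        }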

    Read the article

  • Using the HTML5 <input type="file" multiple="multiple"> Tag in ASP.NET

    - by Rick Strahl
    Per HTML5 spec the <input type="file" /> tag allows for multiple files to be picked from a single File upload button. This is actually a very subtle change that's very useful as it makes it much easier to send multiple files to the server without using complex uploader controls. Please understand though, that even though you can send multiple files using the <input type="file" /> tag, the process of how those files are sent hasn't really changed - there's still no progress information or other hooks that allow you to automatically make for a nicer upload experience without additional libraries or code. For that you will still need some sort of library (I'll post an example in my next blog post using plUpload). All the new features allow for is to make it easier to select multiple images from disk in one operation. Where you might have required many file upload controls before to upload several files, one File control can potentially do the job. How it works To create a file input box that allows multiple file selection you can simply do: <form method="post" enctype="multipart/form-data"> <label>Upload Images:</label> <input type="file" multiple="multiple" name="File1" id="File1" accept="image/*" /> <hr /> <input type="submit" id="btnUpload" value="Upload Images" /> </form> Now when the file open dialog pops up - depending on the browser and whether the browser supports it - you can pick multiple files. Here I'm using Firefox; using the thumbnail preview I can easily pick images to upload on a form. Note that I can select multiple images in the dialog, all of which get stored in the file textbox. The UI for this can be different in some browsers. For example Chrome displays "3 files selected" as text next to the Browse… button when I choose three, rather than showing any files in the textbox. Most other browsers display the standard file input box and display the multiple filenames as a comma delimited list in the textbox. Note that you can also specify the accept attribute in the <input> tag, which specifies a mime-type to specify what type of content to allow. Here I'm only allowing images (image/*) and the browser complies by just showing me image files to display. Likewise I could use text/* for all text formats registered on the machine or text/xml to only show XML files (which would include xml,xst,xsd etc.). Capturing Files on the Server with ASP.NET When you upload files to an ASP.NET server there are a couple of things to be aware of. When multiple files are uploaded from a single file control, they are assigned the same name. In other words if I select 3 files to upload on the File1 control shown above I get three file form variables named File1. This means I can't easily retrieve files by their name: HttpPostedFileBase file = Request.Files["File1"]; because there will be multiple files for a given name. The above only selects the first file. Instead you can only reliably retrieve files by their index. Below is an example I use in an app to capture a number of uploaded images and store them in a database using a business object and EF 4.2. for (int i = 0; i < Request.Files.Count; i++) { HttpPostedFileBase file = Request.Files[i]; if (file.ContentLength == 0) continue; if (file.ContentLength > App.Configuration.MaxImageUploadSize) { ErrorDisplay.ShowError("File " + file.FileName + " is too large. 
Max upload size is: " + App.Configuration.MaxImageUploadSize); return View("UploadClassic",model); } var image = new ClassifiedsBusiness.Image(); var ms = new MemoryStream(16498); file.InputStream.CopyTo(ms); image.Entered = DateTime.Now; image.EntryId = model.Entry.Id; image.ContentType = "image/jpeg"; image.ImageData = ms.ToArray(); ms.Seek(0, SeekOrigin.Begin); // resize image if necessary and turn into jpeg Bitmap bmp = Imaging.ResizeImage(ms.ToArray(), App.Configuration.MaxImageWidth, App.Configuration.MaxImageHeight); ms.Close(); ms = new MemoryStream(); bmp.Save(ms,ImageFormat.Jpeg); image.ImageData = ms.ToArray(); bmp.Dispose(); ms.Close(); model.Entry.Images.Add(image); } This works great and also allows you to capture input from multiple input controls if you are dealing with browsers that don't support multiple file selections in the file upload control. The important thing here is that I iterate over the files by index, rather than using a foreach loop over the Request.Files collection. The files collection returns key name strings, rather than the actual files (who thought that was a good idea at Microsoft?), and so that isn't going to work, since you end up getting multiple keys with the same name. Instead a plain for loop has to be used to loop over all files. Another Option in ASP.NET MVC If you're using ASP.NET MVC you can use the code above as well, but you have yet another option to capture multiple uploaded files by using a parameter for your post action method: public ActionResult Save(HttpPostedFileBase[] file1) { foreach (var file in file1) { if (file.ContentLength < 0) continue; // do something with the file } } Note that in order for this to work you have to specify each posted file variable individually in the parameter list. This works great if you have a single file upload to deal with. You can also pass this in addition to your main model to separate out a ViewModel and a set of uploaded files: public ActionResult Edit(EntryViewModel model, HttpPostedFileBase[] uploadedFile) You can also make the uploaded files part of the ViewModel itself - just make sure you use the appropriate naming for the variable name in the HTML document (since there's no Html.FileFor() extension). Browser Support You knew this was coming, right? The feature is really nice, but unfortunately not supported universally yet. Once again Internet Explorer is the problem: no shipping version of Internet Explorer supports multiple file uploads. IE10 supposedly will, but even IE9 does not. All other major browsers - Chrome, Firefox, Safari and Opera - support multi-file uploads in their latest versions. So how can you handle this? If you need to provide multiple file uploads you can simply add multiple file selection boxes and let people either select multiple files with a single upload file box or use multiples. Alternately you can do some browser detection and if IE is used simply show the extra file upload boxes. It's not ideal, but either one of these approaches makes life easier for folks that use a decent browser and leaves you with a functional interface for those that don't. Here's a UI I recently built as an alternate uploader with multiple file upload buttons. I say this is my 'alternate' uploader - for my primary uploader I continue to use an add-in solution. Specifically I use plUpload and I'll discuss how that's implemented in my next post.
Although I think that plUpload (and many of the other packaged JavaScript upload solutions) are a better choice especially for large uploads, for simple one file uploads input boxes work well enough. The advantage of this solution is that it's very easy to handle on the server side. Any of the JavaScript controls require special handling for uploads which I'll also discuss in my next post. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, ASP.NET, MVC

    Read the article

  • Change Tracking

    - by Ricardo Peres
    You may recall my last post on Change Data Capture. This time I am going to talk about another option for tracking changes to tables on SQL Server: Change Tracking. The main differences between the two are: Change Tracking works with SQL Server 2008 Express; Change Tracking does not require SQL Server Agent to be running; Change Tracking does not keep the old values in case of an UPDATE or DELETE; Change Data Capture uses an asynchronous process, so there is no overhead on each operation; Change Data Capture requires more storage and processing. Here's some code that illustrates its usage: -- for demonstrative purposes, table Post of database Blog only contains two columns, PostId and Title -- enable change tracking for database Blog, for 2 days ALTER DATABASE Blog SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); -- enable change tracking for table Post ALTER TABLE Post ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON); -- see current records on table Post SELECT * FROM Post SELECT * FROM sys.sysobjects WHERE name = 'Post' SELECT * FROM sys.sysdatabases WHERE name = 'Blog' -- confirm that table Post and database Blog are being change tracked SELECT * FROM sys.change_tracking_tables SELECT * FROM sys.change_tracking_databases -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- update post UPDATE Post SET Title = 'First Post Title Changed' WHERE Title = 'First Post Title'; -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- see changes since version 0 (initial) SELECT p.Title, c.PostId, SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT FROM CHANGETABLE(CHANGES Post, 0) AS c LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId; -- is column Title of table Post changed since version 0? SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(COLUMNPROPERTY(OBJECT_ID('Post'), 'Title', 'ColumnId'), (SELECT SYS_CHANGE_COLUMNS FROM CHANGETABLE(CHANGES Post, 0) AS c)) -- get current version SELECT CHANGE_TRACKING_CURRENT_VERSION() -- disable change tracking for table Post ALTER TABLE Post DISABLE CHANGE_TRACKING; -- disable change tracking for database Blog ALTER DATABASE Blog SET CHANGE_TRACKING = OFF; You can read about the differences between the two options here. Choose the one that best suits your needs!
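    A hedged sketch of consuming those changes from C# (the table and column names come from the post; the connection string and the idea of persisting the last synced version between runs are assumptions): poll CHANGETABLE with the version you saw last, then store CHANGE_TRACKING_CURRENT_VERSION() for the next pass.

        using System;
        using System.Data.SqlClient;

        class ChangeTrackingPoller
        {
            static void Main()
            {
                long lastSyncedVersion = 0;   // would normally be persisted between runs

                using (var conn = new SqlConnection(@"Server=.;Database=Blog;Integrated Security=true"))
                {
                    conn.Open();

                    var cmd = new SqlCommand(
                        @"SELECT c.PostId, c.SYS_CHANGE_OPERATION, c.SYS_CHANGE_VERSION, p.Title
                          FROM CHANGETABLE(CHANGES dbo.Post, @lastVersion) AS c
                          LEFT OUTER JOIN dbo.Post AS p ON p.PostId = c.PostId;", conn);
                    cmd.Parameters.AddWithValue("@lastVersion", lastSyncedVersion);

                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0} PostId={1} (version {2})", reader[1], reader[0], reader[2]);

                    // Remember how far we got so the next poll only sees newer changes.
                    var versionCmd = new SqlCommand("SELECT CHANGE_TRACKING_CURRENT_VERSION();", conn);
                    lastSyncedVersion = Convert.ToInt64(versionCmd.ExecuteScalar());
                    Console.WriteLine("Synced up to version {0}", lastSyncedVersion);
                }
            }
        }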

    Read the article

  • Where is the displaytag of scheduled reporting?

    - by Nathan Feger
    So I have been making some reports and using displaytag to output these reports in HTML, CSV, Excel, PDF, etc. They are paginated, take a simple object graph... and output excellent results every time, with very little code. However, I need to use displaytag or its equivalent outside of a JSP, so that a user can schedule a report to run and have that report stored in a db or emailed for later viewing. I have looked at Jasper-based reporting solutions, but making a Jasper jrxml file is just a nightmare. I know there are GUI tools to help, but I'm content with the simple output of displaytable, so I'm happy to give up that control for ease of implementation. Really, if I could take the display:table config out of the JSP I would, so please keep that in mind when proposing a solution. By the way, Java solutions would be my cup of tea.

    Read the article

  • Selecting by observation in R table

    - by Stedy
    I was working through the Rosetta Code example of the knapsack problem in R and I came up with four solutions. What is the best way to output only one of the solutions based on the observation number given in the table? > values[values$value==max(values$value),] I II III value weight volume 2067 9 0 11 54500 25 25 2119 6 5 11 54500 25 25 2171 3 10 11 54500 25 25 2223 0 15 11 54500 25 25 My initial idea was to save this output to a new variable, then query the new variable, such as: > newvariable <- values[values$value==max(values$value),] > newvariable2 <- newvariable[2,] > newvariable2 I II III value weight volume 2119 6 5 11 54500 25 25 This gives me the desired output, but is there a cleaner way to do it?

    Read the article

  • JSP Custom Taglib: Nested Evaluation

    - by Tiago Fernandez
    Say I have my custom taglib: <%@ taglib uri="http://foo.bar/mytaglib" prefix="mytaglib"%> <%@ taglib uri="http://java.sun.com/jstl/core" prefix="c"%> <mytaglib:doSomething> Test </mytaglib:doSomething> Inside the taglib class I need to process a template and tell the JSP to re-evaluate its output, so for example if I have this: public class MyTaglib extends SimpleTagSupport { @Override public void doTag() throws JspException, IOException { getJspContext().getOut().println("<c:out value=\"My enclosed tag\"/>"); getJspBody().invoke(null); } } The output I have is: <c:out value="My enclosed tag"/> Test When I actually need to output this: My enclosed tag Test Is this feasible? How? Thanks.

    Read the article

  • Parsing Json Feeds with google Gson

    - by mnml
    I would like to know how to parse a JSON feed by items, e.g. url / title / description for each item. I have had a look at the docs / API, but it didn't help me. This is what I have so far: import com.google.gson.Gson; import com.google.gson.JsonObject; public class ImportSources extends Job { public void doJob() throws IOException { String json = stringOfUrl("http://feed.test/all.json"); JsonObject jobj = new Gson().fromJson(json, JsonObject.class); Logger.info(jobj.get("responseData").toString()); } public static String stringOfUrl(String addr) throws IOException { ByteArrayOutputStream output = new ByteArrayOutputStream(); URL url = new URL(addr); IOUtils.copy(url.openStream(), output); return output.toString(); } }

    Read the article

  • Programmatically retrieve disconnected network adapter information in .NET

    - by Soo Wei Tan
    I have an application written in C# that needs to retrieve information like IP address, subnet mask from a disconnected network adapter. I've tried using various methods such as WMI and the .NET NetworkAdapter class but they don't return any useful data when the network adapter is disconnected. I'm pretty sure Windows keeps this information somewhere, since I can apply network settings using netsh and it appears correctly in the Control Panel. One thing that worked for me in XP was to parse the output of the netsh tool and it would return information even for a disconnected adapter. However, this doesn't seem to work on Windows 7. Win XP output: Configuration for interface "Local Area Connection 5" DHCP enabled: No IP Address: 169.254.0.128 SubnetMask: 255.255.255.0 InterfaceMetric: 0 Win7 output: Configuration for interface "Local Area Connection 2" DHCP enabled: No InterfaceMetric: 5 Any ideas?
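    One avenue that may help (an assumption about where Windows keeps the static configuration, not something verified in the question): the per-adapter TCP/IP settings live in the registry under SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<adapter GUID>, and those values are readable whether or not the adapter is connected. A C# sketch:

        using System;
        using System.Net.NetworkInformation;
        using Microsoft.Win32;

        class StaticIpDump
        {
            static void Main()
            {
                foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
                {
                    // nic.Id is the adapter GUID used as the registry key name.
                    string keyPath = @"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\" + nic.Id;
                    using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath))
                    {
                        if (key == null) continue;

                        // For statically configured adapters these are REG_MULTI_SZ values;
                        // DHCP adapters keep their lease under DhcpIPAddress/DhcpSubnetMask instead.
                        var addresses = key.GetValue("IPAddress") as string[];
                        var masks = key.GetValue("SubnetMask") as string[];

                        Console.WriteLine("{0} ({1})", nic.Name, nic.OperationalStatus);
                        if (addresses != null && masks != null)
                            for (int i = 0; i < addresses.Length; i++)
                                Console.WriteLine("  {0} / {1}", addresses[i], i < masks.Length ? masks[i] : "?");
                    }
                }
            }
        }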

    Read the article

  • Jquery autocomplete not firing on keyup unless focus changes

    - by TheIG
    I am using a jQuery autocomplete and I have a keyup event for my textbox. When I enter a letter the function is called but the box is not populated. Once I click away from the box and then click back into it the autocomplete works great. Really weird issue and I have no idea how to fix it. Any help would be appreciated. Here's my code: $(document).ready(function(){ var x; var output; x = document.getElementById('site').value; $.getJSON(url,{field: "name",value: x, comparison: "LIKE"}, function(json){ //code to format output $("#site").autocomplete(output, json); }); }); <input type ="text" size ="40" id="site"></input>

    Read the article

  • Drupal db_query error need help

    - by Gobi
    Hi Drupal pals, I'm using Drupal 6.15 and doing my first project in Drupal. I ran into an issue while running the query below with db_query. I have the keywords drupal and delhi in column 'tag' of the table 'tagging'. db_query(SELECT * FROM {tagging} WHERE tag LIKE '%drup%') won't retrieve the correct output; it shows null. But the query modified like this, db_query(SELECT * FROM {tagging} WHERE tag LIKE 'drup%'), retrieves "drupal" as output. Finally I used the PHP core mysql_query: mysql_query(SELECT * FROM tagging WHERE tag LIKE '%drup%') retrieves the exact and correct output "drupal". Does anyone have a solution? Thanks, Gobi

    Read the article

  • vim filters and stdout/stderr

    - by ahe
    When I use :%! to run the contents of a file through a filter and the filter fails (it returns a code other than 0) and prints an error message to stderr, I get my file replaced with this error message. Is there a way to tell vim to skip the filtering if the filter returns a status code that indicates an error, and/or ignore output that the filter program writes to stderr? There are cases where you want your file to be replaced with the output of the filter, but most often this behavior is wrong. Of course I can just undo the filtering with one keypress, but it isn't optimal. Also I have a similar problem when writing a custom vim script to do the filtering. I have a script that calls a filter program with system() and replaces the file in the buffer with its output, but there doesn't seem to be a way to detect if the lines returned by system() were written to stdout or to stderr. Is there a way to tell them apart in vim script?

    Read the article

  • How to overcome vc++ warning C4003 while writing common code for both gcc and vc++

    - by compbugs
    I have code that is compiled in both gcc and vc++. The code has a common macro which is called in two scenarios: when we pass some parameters to it, and when we don't want to pass any parameters to it. An example of such code is: #define B(X) A1##X int main() { int B(123), B(); return 0; } The expected output from the pre-processing step of compilation is: int main() { int A1123, A1; return 0; } The output for both gcc and vc++ is as expected, but vc++ gives a warning: warning C4003: not enough actual parameters for macro 'B' How can I remove this warning and yet get the expected output? Thanks.

    Read the article

  • Setting up Visual Studio 2010 to step into Microsoft .NET Source Code

    - by rajbk
    Using the Microsoft Symbol Server to obtain symbol debugging information is now much easier in VS 2010. Microsoft gives you access to their internet symbol server that contains symbol files for most of the .NET framework, including the recently announced availability of MVC 2 symbols. SETUP In VS 2010 RTM, go to Tools –> Options –> Debugging –> General and check “Enable .NET Framework source stepping”. We get the following dialog box. This automatically disables “Enable My Code”. Go to Debugging –> Symbols and check “Microsoft Symbol Servers”. You can selectively exclude modules if you want to. You will get a warning dialog like so: Hitting OK will start the download process. The setup is complete. You are now ready to start debugging! DEBUGGING Add a breakpoint to your application and run the application in debug mode (F5 shortcut for me). Go to your call stack when you hit the breakpoint. Right click on a frame that is grayed out. Select “Load Symbols from” “Microsoft Symbol Servers”. VS will begin a one-time download of that assembly. This assembly will be cached locally so you don’t have to wait for the download the next time you debug the app. We get a one-time license agreement dialog box. You might see an error like the one below regarding different encoding (hopefully it will be fixed). Assemblies for which the symbols have been loaded are no longer grayed out. Double clicking on any entry in the call stack should now directly take you to the source code for that assembly. AFAIK, not all symbols are available on the MS symbol server. In cases like that you will see a tab like the one below and be given the option to “Show Disassembly”. Enjoy! Newsreel Announcer: Humiliated, Muntz vows a return to Paradise Falls and promises to capture the beast alive! Charles Muntz: [speaking to a large audience outside in the newsreel] I promise to capture the beast alive, and I will not come back until I do!

    Read the article

  • XML generation error.

    - by Janis Peisenieks
    I'm trying to generate an XML output with Zend_Framework, but this nasty thing keeps popping up: XML Parsing Error: XML or text declaration not at start of entity Location: http://cart/index/kurpirkt Line Number 2, Column 1:<?xml version="1.0" encoding="utf-8"?> ^ As far as I know there are no white-spaces in any of my include files, and even if there were, I think that the ob_clean() function should have taken care of it. Here is my code: public function kurpirktAction() { ob_clean(); // XML-related routine $xml = new DOMDocument('1.0', 'utf-8'); $xml->appendChild($xml->createElement('foo', 'bar')); $output = $xml->saveXML(); // Both layout and view renderer should be disabled Zend_Controller_Action_HelperBroker::getStaticHelper('viewRenderer')->setNoRender(true); Zend_Layout::getMvcInstance()->disableLayout(); // Setting up headers and body $this->_response->setHeader('Content-Type', 'text/xml; charset=utf-8') ->setBody($output); } Any help or suggestions?

    Read the article

  • Google Maps API Geocode gives error "Invalid label" in Firefox

    - by Bennystijn
    Today I struggled with the following: $.ajax({url:'http://maps.google.com/maps/api/geocode/json?address=Karachi&sensor=false&output=json&callback=?', dataType: 'json', success: function(data){ //eval("("+data+")"); alert(data); } }); Firefox gives the error "Invalid Label" and Chrome "Uncaught SyntaxError: Unexpected token :". I found a lot of posts about this, and I tried all kinds of things like eval(), but also: $.getJSON('http://maps.google.com/maps/api/geocode/json?address=Karachi&sensor=false&output=json&callback=?', function(data){ //eval("("+data+")"); alert(data); } ); Same result. Also, other JSON data works fine, for instance Flickr ("http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?"). So it has something to do with the Google Maps API output, I guess. Thanks in advance.

    Read the article

  • confused about python decorators

    - by nbv4
    I have a class that has an output() method which returns a matplotlib Figure instance. I have a decorator I wrote that takes that fig instance and turns it into a Django response object. My decorator looks like this: class plot_svg(object): def __init__(self, view): self.view = view def __call__(self, *args, **kwargs): print args, kwargs fig = self.view(*args, **kwargs) canvas=FigureCanvas(fig) response=HttpResponse(content_type='image/svg+xml') canvas.print_svg(response) return response and this is how it was being used: def as_avg(self): return plot_svg(self.output)() The only reason I had it that way instead of using the "@" syntax is because when I do it with the "@": @plot_svg def as_svg(self): return self.output() I get this error: as_svg() takes exactly 1 argument (0 given) I'm trying to 'fix' this by putting it in the "@" syntax but I can't figure out how to get it working. I'm thinking it has something to do with self not getting passed where it's supposed to...

    Read the article
