Search Results

  • Is this a bad version of the Merge Sort algorithm?

    - by SebKom
        merge1(int low, int high, int S[], U[]) {
            int k = (high - low + 1)/2
            for q (from low to high)
                U[q] = S[q]
            int j = low
            int p = low
            int i = low + k
            while (j <= low + k - 1) and (i <= high) do {
                if ( U[j] <= U[i] ) {
                    S[p] := U[j]
                    j := j+1
                } else {
                    S[p] := U[i]
                    i := i+1
                }
                p := p+1
            }
            if (j <= low + k - 1) {
                for q from p to high do {
                    S[q] := U[j]
                    j := j+1
                }
            }
        }

        merge_sort1(int low, int high, int S[], U[]) {
            if low < high {
                int k := (high - low + 1)/2
                merge_sort1(low, low+k-1, S, U)
                merge_sort1(low+k, high, S, U)
                merge1(low, high, S, U)
            }
        }

    I am really sorry for the terrible formatting; as you can tell, I am not a regular visitor here. So, basically, this is from my lecture notes. I find it quite confusing in general, but I understand the biggest part of it. What I don't understand is the need for the "if (j <= low + k - 1)" part. It looks like it checks whether there are any elements "left" in the left part. Is that even possible when merge sorting?
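
    For what it's worth, a small worked case shows why that final copy loop is needed. Take S = [3, 4, 1, 2] with low = 0 and high = 3, so k = 2 and U becomes a copy [3, 4, 1, 2]:

        U[0]=3 vs U[2]=1  ->  S[0] := 1, i = 3
        U[0]=3 vs U[3]=2  ->  S[1] := 2, i = 4 > high, main loop ends
        j = 0 <= low + k - 1 = 1, so the final loop copies the leftover 3 and 4

    The right half runs out first whenever every remaining left element is larger, so yes, elements can be "left" in the left part; without that final copy, S would end up as [1, 2, 1, 2] instead of [1, 2, 3, 4]. (The mirror case, leftovers in the right half, needs no copy: when the left half is exhausted, p has caught up to i, and S[p..high] still holds exactly those elements.)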

  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. This is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:
    - All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application and will be more portable when I eventually manage to escape from MySQL).
    - All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).

    Query modifications:
    - All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:
    - The application must store the user's time zone and preferred date format (e.g. US vs the rest of the world).
    - All timestamps will be converted according to the user settings before being displayed.
    - All input timestamps will be converted to UTC according to the user settings before being input.

    Additional notes: converting formats will be done at the application level for several main reasons.
    - The approach to converting time zones varies from DB to DB, so handling it there would be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
    - MySQL TIMESTAMPs have a limited range of permitted dates (~1970 to ~2038).
    - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server time zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
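
    As a sketch of the PHP side of this plan (function names and the example zone are illustrative, not from the original application), the built-in DateTime/DateTimeZone classes handle both directions of the conversion:

        <?php
        // Convert a user-entered local timestamp to UTC for storage.
        function toUtc($local, $userTz) {
            $dt = new DateTime($local, new DateTimeZone($userTz));
            $dt->setTimezone(new DateTimeZone('UTC'));
            return $dt->format('Y-m-d H:i:s');
        }

        // Convert a stored UTC DATETIME back to the user's zone for display.
        function fromUtc($utc, $userTz, $fmt = 'Y-m-d H:i:s') {
            $dt = new DateTime($utc, new DateTimeZone('UTC'));
            $dt->setTimezone(new DateTimeZone($userTz));
            return $dt->format($fmt);
        }

        // Sydney is UTC+10 in June (no DST), so this prints 2010-06-01 02:00:00.
        echo toUtc('2010-06-01 12:00:00', 'Australia/Sydney');

    Keeping the stored format fixed at 'Y-m-d H:i:s' matches what MySQL DATETIME columns expect, while $fmt can carry the per-user display preference.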

  • Decompiling an old Program

    - by Pedro Laranjeiro
    Hi. I have been asked to update a program written in 1987 in Delphi (I guess). I have no documentation about this program, only a few side notes the programmer took that don't make much sense to me. The CD shows these files:

        Size   | Filename
        19956  | VP.DTA
        142300 | VP.LEX
        404    | VP.NDX
        126502 | VP.RCS
        131016 | VP.SCR
        150067 | VP.XEL
        101791 | vp.exe

    Are any of these files a database? If so, can I access its data? I tried several code decompilers, but they show a message saying it is not a Win32-compatible application. The program runs in MS-DOS. Is it possible to obtain the source code? Can I use this code in any way to build a new application? Thanks.

    Update01: I can run the program in MS-DOS. The program conjugates verbs and shows an example sentence where the verb can be used. The GUI is a little bit confusing and there is no help menu, so I can't see all the capabilities of the program.

  • Get the id of the link and pass it to the jQueryUI dialog widget

    - by Mike Sanchez
    I'm using the dialog widget of jQueryUI. The links are grabbed from an SQL database using jQuery+AJAX, which is the reason why I used "live":

        $(function() {
            var $dialog = $('#report').dialog({
                autoOpen: false,
                resizable: false,
                modal: true,
                height: 410,
                width: 350,
                draggable: true
            });
            // store reference to placeholders
            $uid = $('#reportUniqueId');
            $('.reportopen').live("click", function (e) {
                $dialog.dialog('open');
                var $uid = $(this).attr('id');
                e.preventDefault();
            });
        });

    My question is, how do I pass the id of the link that triggered the dialog widget to the dialog box itself? The link is set up like this:

        <td align="left" width="12%">
            <span id="notes">
                [<a href="javascript:void(0)" class="reportopen" id="<?=$reportId;?>">Spam</a>]
            </span>
        </td>

    And the dialog box is set up like this:

        <div id="report" title="Inquire now">
            HAHAHAHAHA
            <span id="reportUniqueId"></span>
        </div>

    I'd like for the id to be passed and generated in the <span id="reportUniqueId"></span> part of the dialog box. Any idea?
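
    A minimal sketch of one way to wire this up: write the id into the placeholder before opening. (Note that the inner `var $uid` in the original creates a new local variable rather than updating the outer one, so the value never leaves the handler.)

        $('.reportopen').live("click", function (e) {
            e.preventDefault();
            $('#reportUniqueId').text(this.id); // drop the clicked link's id into the dialog's span
            $dialog.dialog('open');
        });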

  • hosting a high traffic facebook app (game)

    - by z3cko
    We are currently developing a high-traffic Facebook application. All the traffic will come within one month, during which there are 500,000 to 1,000,000 expected users. After that month, the game is over and we have a winner, so the app will be archived. We are currently planning to develop the application with Ruby on Rails and are searching for hosting options that can deal with the traffic. The problem is not so much the number of users, but the peak values: we will have around 500,000 requests coming in daily within a short timeframe (let's say within 3 minutes in the worst case). We are expecting 500,000 to 1,000,000 users of the application, with peaks at 1:00pm (timezone GMT+1), when most (up to 80%) of the users will send most of the requests. The requests run from the 11th of June to the 11th of July; after that, the app/game is closed/over. We are currently developing an aggressive caching mechanism, and are thinking about 2 or 3 small apps/webservices that will handle the load. The load is distributed as follows:

    a) main application, cached data (11 screens, 200k each)
    b) voting: every day until 1:00pm (timezone GMT+1) - every user votes with about 10k of data sent; high concurrent peak values!

    Questions: is there any specific application setup that is recommendable? Are there any hosting partners that can be recommended? Thanks!
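
    A rough back-of-the-envelope on the stated worst case, which is the number any hosting partner would need to be sized against:

        500,000 requests / 180 s   ≈ 2,800 requests/s at peak
        2,800 requests/s × 10k each ≈ 28 MB/s of inbound vote data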

  • UnauthorizedAccessException when running desktop application from shared folder

    - by Atara
    I created a desktop application using VS 2008. When I run it locally, all works well. I shared my output folder (WITHOUT allowing network users to change my files) and ran my exe from another Vista computer on our intranet. When running the shared exe, I receive a "System.UnauthorizedAccessException" when trying to read a file. How can I give permission to allow reading the file? Should I change the code? Should I grant permission to the application/folder on the Vista computer? How?

    Notes:
    - I do not use ClickOnce; the application should be distributed using xcopy.
    - My application's target framework is ".NET Framework 2.0".
    - On the Vista computer, ControlPanel | UninstallOrChangePrograms says it has "Microsoft .NET Framework 3.5 SP1".
    - I also tried to map the folder to a drive, but got the same errors, only now the fileName is "T:\my.ocx".

    My code:

        Dim src As String = mcGlobals.cmcFiles.mcGetFileNameOcx()
        Dim ioStream As New System.IO.FileStream(src, IO.FileMode.Open)

        Public Shared Function mcGetFileNameOcx() As String
            Dim dirName As String = Application.StartupPath & "\"
            Dim sFiles() As String = System.IO.Directory.GetFiles(dirName, "*.ocx")
            Dim i As Integer
            For i = 0 To UBound(sFiles)
                Debug.WriteLine(System.IO.Path.GetFullPath(sFiles(i)))
                ' if found any - return the first:
                Return System.IO.Path.GetFullPath(sFiles(i))
            Next
            Return ""
        End Function

    The exception I receive:

        System.UnauthorizedAccessException: Access to the path '\\computerName\sharedFolderName\my.ocx' is denied.
            at System.IO._Error(Int32 errorCode, String maybeFullPath)
            at System.IO.FileStream.Init(...)
            at System.IO.FileStream..ctor(...)
            at System.IO.FileStream..ctor(String path, FileMode mode)
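
    One classic cause when a .NET 2.0 exe fails only when launched from a UNC share: Code Access Security places it in the LocalIntranet zone, which does not grant unrestricted FileIOPermission. A hedged sketch of granting the share full trust with caspol (run on the Vista machine; adjust the URL form to your environment):

        caspol -m -ag 1.2 -url "file://\\computerName\sharedFolderName\*" FullTrust

    (.NET 3.5 SP1 relaxed this rule for exes launched from shares, so if CAS turns out not to be the culprit, the next suspect would be the NTFS/share ACLs on the .ocx file itself.)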

  • How is this Perl code selecting two different elements from an array?

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand, but I managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand.

        my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || ();
        my @selected_heads = ();
        for my $i (0..1) {
            $selected_heads[$i] = int rand scalar @heads;
            for my $j (0..@heads-1) {
                last if (!grep $j eq $_, @selected_heads[0..$i-1]);
                $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; #WTF?
            }
            my $head_nr = sprintf "%04d", $i;
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache");
        }

    From what I can understand, this is supposed to be some kind of randomizer, but I have never seen a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them.

    === NOTES ===
    The OSA Framework is a framework of our own. The modules are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.
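
    For comparison, a minimal sketch of the same "pick two distinct random files and copy them" intent, assuming the OSA wrappers behave like their UNIX namesakes and @heads has at least two entries:

        use List::Util qw(shuffle);

        # Two distinct random indices into @heads.
        my @picks = (shuffle 0 .. $#heads)[0 .. 1];

        for my $i (0 .. $#picks) {
            my $head_nr = sprintf "%04d", $i;
            OSA::Fast::IO::Cp->cp($SysKey, '', "osr/tiparlo/$heads[$picks[$i]].txt",   "$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey, '', "osr/tiparlo/$heads[$picks[$i]].cache", "$recdir/heads/$head_nr.cache");
        }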

  • What different terms mean the same thing (or don't, but people think they do)?

    - by Matthew Jones
    One of the pitfalls I run into on a daily basis is customers saying one thing while meaning another. Usually this is just due to a miscommunication somewhere, but occasionally they are, in fact, saying the same thing I am, just using a different term. For example, one of my customers the other day mentioned a feature he called "find as you type." Being a little confused, I asked him what he meant, and he described the feature in Google where, once you start typing a search query, Google suggests other, popular queries that match the letters you have typed. Click! He meant AutoComplete! He was not wrong; it is just that I had never heard that term before. In the spirit of reducing confusion: what terms can you think of that are different but mean, essentially, the same thing? Also, what terms do people think mean the same thing but don't? Please differentiate between the two cases, and please give only one set of terms per answer, so we can vote on the best ones.

  • How do you work on Strategic Development initiatives when Tactical work takes priority?

    - by Shaun F
    My day-to-day job consists of maintaining large-volume websites, and this has given me exposure to developing better methods to develop and maintain the code. It has also given me a large body of knowledge of the code base, in terms of troubleshooting, that is beneficial to the company. I'm also the maintainer of an IDE plug-in I created to help navigate and generate the code that is used. Operationally, though, my job is to handle any client requests that come in that are emergencies, and to make any enhancements and additions to the code base that are required. This work, along with the daily managing and feeding of the project managers, will take up my entire day. How does one manage the time between the tactical day job and the strategic initiatives? How does one get, and ask for, recognition for taking strategic initiatives? Is the 8-9 hour day just not going to cut it? Is there even a job out there for programmers to develop strategic initiatives and solutions for a company? I want to also point out that this isn't a problem with the company at all; I think this is more of a personal-improvement question. Nobody will say no to the improvements. I believe in making things happen, but I don't think I'm going to get time from the company to do it...

  • Dealing with infinite loops when constructing states for LR(1) parsing

    - by Bruce
    I'm currently constructing LR(1) states from the following grammar.

        S -> AS
        S -> c
        A -> aA
        A -> b

    where A, S are nonterminals and a, b, c are terminals.

    This is the construction of I0:

        I0: S' -> .S, epsilon
            ---------------
            S -> .AS, epsilon
            S -> .c, epsilon
            ---------------
            S -> .AS, a
            S -> .c, c
            A -> .aA, a
            A -> .b, b

    And I1, from S:

        I1: S' -> S., epsilon  // DONE

    And so on. But when I get to constructing I4...

        From a, I4: A -> a.A, a
                    -----------
                    A -> .aA, a
                    A -> .b, b

    The problem is A -> .aA. When I attempt to construct the next state from a, I'm going to once again get the exact same contents as I4, and this continues infinitely. A similar loop occurs with S -> .AS. So, what am I doing wrong? There has to be some detail that I'm missing, but I've browsed my notes and my book and either can't find it or just don't understand what's wrong here. Any help?
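
    For reference, the usual fix is bookkeeping rather than grammar surgery: when goto() produces an item set you have already seen, reuse that existing state instead of allocating a new one. A minimal sketch of that loop (closure and goto are assumed helpers returning frozensets of LR(1) items):

        def build_states(start_items, symbols, closure, goto):
            states = [closure(start_items)]
            seen = {states[0]: 0}          # item set -> state number
            work = [0]
            transitions = {}
            while work:
                i = work.pop()
                for sym in symbols:
                    target = goto(states[i], sym)
                    if not target:
                        continue
                    if target not in seen:     # only genuinely new item sets become states
                        seen[target] = len(states)
                        states.append(target)
                        work.append(seen[target])
                    transitions[(i, sym)] = seen[target]
            return states, transitions

    With that check in place, goto(I4, a) simply becomes a transition from I4 back to I4 itself, and the construction terminates.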

  • Data historian queries

    - by Scott Dennis
    Hi, I have a table that contains data for electric motors. The format is:

        DATE (DateTime)         | TagName (VarChar(50)) | Val (Float)
        2009-11-03 17:44:13.000 | Motor_1               | 123.45
        2009-11-04 17:44:13.000 | Motor_1               | 124.45
        2009-11-05 17:44:13.000 | Motor_1               | 125.45
        2009-11-03 17:44:13.000 | Motor_2               | 223.45
        2009-11-04 17:44:13.000 | Motor_2               | 224.45

    Data for each motor is inserted daily, so there would be 31 Motor_1s and 31 Motor_2s, etc. We do this so we can trend it on our control system displays. I am using views to extract last month's max val and last month's min val, and the same for this month's data. Then I join the two and calculate the difference to get the actual run hours for that month. The "Val" is a non-resettable accumulation from a PLC (controller).

    This is my query for last month's max value:

        SELECT TagName, Val AS Hours
        FROM dbo.All_Data_From_Last_Mon AS cur
        WHERE (NOT EXISTS
            (SELECT TagName, Val
             FROM dbo.All_Data_From_Last_Mon AS high
             WHERE (TagName = cur.TagName) AND (Val > cur.Val)))

    This is my query for last month's min value:

        SELECT TagName, Val AS Hours
        FROM dbo.All_Data_From_Last_Mon AS cur
        WHERE (NOT EXISTS
            (SELECT TagName, Val
             FROM dbo.All_Data_From_Last_Mon AS high
             WHERE (TagName = cur.TagName) AND (Val < cur.Val)))

    This is the query that calculates the difference, and it runs a bit slow:

        SELECT dbo.Motors_Last_Mon_Max.TagName,
               STR(dbo.Motors_Last_Mon_Max.Hours - dbo.Motors_Last_Mon_Min.Hours, 12, 2) AS Hours
        FROM dbo.Motors_Last_Mon_Min
        RIGHT OUTER JOIN dbo.Motors_Last_Mon_Max
            ON dbo.Motors_Last_Mon_Min.TagName = dbo.Motors_Last_Mon_Max.TagName

    I know there is a better way. Ultimately I just need last month's total and this month's total. Any help would be appreciated. Thanks in advance.
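
    For what it's worth, since only the spread per tag is needed, a single aggregate pass can replace both NOT EXISTS views and the join; a sketch against the same view:

        SELECT TagName,
               STR(MAX(Val) - MIN(Val), 12, 2) AS Hours
        FROM dbo.All_Data_From_Last_Mon
        GROUP BY TagName

    The same shape works for the current month's view, and MAX/MIN with a GROUP BY is typically far cheaper than correlated subqueries that rescan the view for every row.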

  • association of more than one model to a listview

    - by Veer
    I have 3 tables in my database. Each table has 3 fields, excluding the ID field, of which 2 fields are of type nvarchar. None of the tables are related. My ListView in the application helps the user to search my database, the search being incremental. The search includes the nvarchar fields of the 3 tables, i.e., 6 fields in total. E.g.:

        PhoneBook: Name, PhoneNo
        Notes:     Title, Content
        Bookmarks: Name, url

    I have the models generated for the 3 tables. Now the ListBox should display the PhoneBook.Name, Notes.Title and Bookmarks.Name fields, i.e., it should be bound to them. But they are from different models. I also should be able to perform CRUD operations on the item searched. How would I do that?

    P.S: Separate ViewModels are created for each Model, which are used in their respective views for handling those tables individually. But this is an integrated view where the user should be able to search everything. Also, please could somebody suggest a better title for this question :)

  • Fastest way to become a MySQL expert?

    - by Kerry
    I have been using MySQL for years, mainly on smaller projects until the last year or so. I'm not sure if it's the nature of the language or my lack of real tutorials that gives me the feeling of being unsure whether what I'm writing is proper, both for optimization purposes and for scaling purposes. While self-taught in PHP, I'm very sure of myself and the code I write, and can easily compare it to others' and so on. With MySQL, I'm not sure whether (and in what cases) an INNER JOIN or LEFT JOIN should be used, nor am I aware of the large amount of functionality that it has. While I've written code for databases that handled tens of millions of records, I don't know if it's optimal. I often find that a small tweak will make a query take less than 1/10 of the original time... but how do I know that my current query isn't also slow? I would like to become completely confident in this field: able to optimize databases and design for scale. Use is not a problem -- I use MySQL on a daily basis in a number of different ways. So, the question is, what's the path? Reading a book? Websites/tutorials? Recommendations?

  • Typical SVN repo structure seems to be sub-optimal for continuous integration...

    - by Dave
    I've set up our SVN repository like the Subversion book suggests, and this is also how my previous companies have done it. It looks something like this:

        /trunk
        /branches
        /tags
        /extlibs
        /docs

    where the first three are pretty obvious, and extlibs is for 3rd-party assemblies that we wouldn't typically recompile ourselves. All of this works great for the daily development stuff. Now I've installed TeamCity and have builds, unit tests, code coverage, and code analysis running. Everything is great, except for the fact that this code structure results in too much code getting downloaded. So here's the catch-22, in my opinion: it's silly to download all of the aforementioned folders from the SVN repo when I only need /trunk and /extlibs, but I can only specify one repo folder to download in the TeamCity VCS settings. The other possibility is to put the /extlibs folder into /trunk, but in order to compile branches, /extlibs would then have to go into all of those as well (since I usually branch the trunk, and not individual subfolders)... and this would seem infinitely more evil, since /extlibs could actually be larger than /trunk and /branches, with all of the binaries stored there... Do you guys have any suggestions for me? Thanks!
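
    One possibility worth weighing: keep /extlibs at the top level but attach it to trunk with an svn:externals definition, so a single checkout of /trunk pulls both down. A hedged sketch (the repository URL is a placeholder):

        svn propset svn:externals "extlibs http://svn.example.com/repo/extlibs" trunk
        svn commit -m "pull extlibs into trunk working copies via externals" trunk

    Branches copied from trunk inherit the property, though the external keeps pointing at the shared, unbranched /extlibs, which is usually what you want for third-party binaries.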

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night and updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0: we were running the snapshot agent before, and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround?

    Here is the line from the script that adds the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub',
            @frequency_type = 4,
            @frequency_interval = 1,
            @frequency_relative_interval = 1,
            @frequency_recurrence_factor = 0,
            @frequency_subday = 1,
            @frequency_subday_interval = 5,
            @active_start_date = 0,
            @active_end_date = 0,
            @active_start_time_of_day = 500,
            @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02]
                 -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent utility. There is no mention on that link of dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

  • In SQL Server, what is the most efficient way to compare records to other records for duplicates within

    - by Glenn
    We have an SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs, but I have to admit I'm having a lot of trouble with this one.

    For a concrete example, suppose we have 1 table, an "orders" table, with the following 6 fields:

        order_id, customer_id, product_id, quantity, sale_date, price

    We want to look through the records to find suspect duplicates on the following example criteria, which get increasingly harder:

    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but different quantities within 20 units, and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
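
    Each criterion can at least be expressed as a self-join; a hedged sketch of criterion 2 in SQL Server syntax (criteria 1 and 3 follow the same pattern with different predicates):

        SELECT a.order_id, b.order_id
        FROM orders AS a
        JOIN orders AS b
          ON  a.customer_id = b.customer_id
          AND a.product_id  = b.product_id
          AND a.quantity    = b.quantity
          AND a.order_id    < b.order_id   -- report each pair once
          AND ABS(DATEDIFF(day, a.sale_date, b.sale_date)) <= 5

    Criterion 3 would swap the quantity equality for ABS(a.quantity - b.quantity) <= 20. An index on (customer_id, product_id, sale_date) is the kind of thing that keeps such joins off a full table scan.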

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details that they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables pointing to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

  • Is there any reason why jQuery Sortable would work in IE/Chrome but not Firefox?

    - by DNS
    I have a fairly straightforward list of horizontally floated items, like this:

        <div class="my-widget-container">
            <div class="my-widget-column">...</div>
            ...
        </div>

    Both the container and each column have a fixed width, set using jQuery's .width(). The container is position: relative and the column is float: left and overflow: hidden. Not sure if any other styles/properties are relevant. When I apply a jQuery-UI sortable to this, the result is exactly what I'd expect in Chrome 8 and IE 8; the columns can be dragged around to change their order. But in Firefox 3.6 I can click an item and drag to create a new sort helper, yet the actual sort never happens; the real item's position in the DOM never changes.

    I dug around a little in Sortable and added a debug print to _intersectsWithPointer. Whenever the drag helper moves, Sortable runs through its list of elements and uses this method to determine whether the drag helper has passed over one. What I saw was that item.left had the same value for all my columns, which is obviously not correct, and probably the source of the problem. It looks like all columns had a left position corresponding to that of the first column.

    I'm using jQuery 1.4.3 and jQuery UI Sortable 1.8. Those aren't the very latest versions, but they're pretty recent, and I don't see anything in the Sortable release notes that indicates any such problem has been fixed. Does anyone know what might be happening here, or have any ideas for further debugging?

  • When is a try catch not a try catch?

    - by Dearmash
    I have a fun issue where, during application shutdown, try/catch blocks are seemingly being ignored in the stack. I don't have a working test project yet, due to a deadline (otherwise I'd totally try to repro this), but consider the following code snippet.

        public static string RunAndPossiblyThrow(int index, bool doThrow)
        {
            try
            {
                return Run(index);
            }
            catch(ApplicationException e)
            {
                if(doThrow)
                    throw;
            }
            return "";
        }

        public static string Run(int index)
        {
            if(_store.Contains(index))
                return _store[index];
            throw new ApplicationException("index not found");
        }

        public static string RunAndIgnoreThrow(int index)
        {
            try
            {
                return Run(index);
            }
            catch(ApplicationException e)
            {
            }
            return "";
        }

    During runtime this pattern works famously. We get legacy support for code that relies on exceptions for program control (bad), and we get to move forward and slowly remove exceptions used for program control. However, when shutting down our UI, we see an exception thrown from "Run" even though "doThrow" is false for ALL current uses of "RunAndPossiblyThrow". I've even gone so far as to verify this by modifying the code to look like "RunAndIgnoreThrow", and I'll still get a crash post UI shutdown. Mr. Eric Lippert, I read your blog daily; I'd sure love to hear it's some known bug and I'm not going crazy.

    EDIT: This is multi-threaded, and I've verified all objects are not modified while being accessed.
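
    Given the multi-threading note, one hedged explanation fits the symptoms: a try/catch only guards its own thread's call stack, so a throw from Run on a worker thread during shutdown never reaches a catch that lives on another thread. A minimal sketch of that effect:

        using System;
        using System.Threading;

        class CrossThreadDemo
        {
            static void Main()
            {
                try
                {
                    // Thrown on the worker's stack, so the catch below never
                    // sees it; since .NET 2.0 this tears the process down.
                    Thread t = new Thread(delegate() { throw new ApplicationException("boom"); });
                    t.Start();
                    t.Join();
                }
                catch (ApplicationException)
                {
                    Console.WriteLine("never reached");
                }
            }
        }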

  • How much is too much memory allocation in NDK?

    - by Maximus
    The NDK download page notes that "typical good candidates for the NDK are self-contained, CPU-intensive operations that don't allocate much memory, such as signal processing, physics simulation, and so on." I came from a C background and was excited to try to use the NDK for most of my OpenGL ES functions and any native functions related to physics, animation of vertices, etc. I'm finding that I'm relying quite a bit on native code, and wondering if I may be making some mistakes. I've had no trouble with testing at this point, but I'm curious whether I may run into problems in the future. For example, I have a game struct defined (somewhat like what is seen in the San Angeles example). I'm loading vertex information for objects dynamically (just what is needed for an active game area), so there's quite a bit of memory allocation happening for vertices, normals, texture coordinates, indices and texture graphic data... just to name the essentials. I'm quite careful about freeing what is allocated between game areas. Would I be safer setting some caps on array sizes, or should I charge bravely forward as I'm going now?

  • XML File as Excel file.

    - by FrustratedWithFormsDesigner
    I have a number of reports that I run against my database that need to eventually go to the end users as Excel spreadsheets. Initially I was creating text reports, but the steps to convert the text to a spreadsheet were a bit cumbersome: there were too many steps to import the text into the spreadsheet, and multi-line text rows were imported as individual rows in Excel (which was incorrect). Currently, I am generating simple XML and saving the file with an ".xls" extension. This works better, but there is still the problem of Excel prompting the user with an XML import dialogue every time they open the file, and then having to save a new file if they add notes or change the layout of the file (which they almost certainly will be doing). Sample "xls" file:

        <?xml version="1.0" standalone="yes"?>
        <report_rows>
            <row>
                <NAME>Test Data</NAME>
                <COUNT>345</COUNT>
            </row>
            <!-- many more row elements... -->
        </report_rows>

    Is there any way to add markup to the file to hint to Excel how it should import and handle the file? Ideally, the end user should be able to open and save the file like any other spreadsheet they create directly from Excel. Is this even possible?

    UPDATE: We are running Office 2003 here.
    UPDATE: The XML is generated from a sqlplus script; there is no option to use C#/.NET here.
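
    Since Office 2003 is in play, one approach that may fit is emitting the XML Spreadsheet 2003 (SpreadsheetML) dialect instead of ad-hoc XML; the mso-application processing instruction tells Excel to open it as a workbook with no import dialogue. A minimal sketch holding the sample row above:

        <?xml version="1.0"?>
        <?mso-application progid="Excel.Sheet"?>
        <Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
                  xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
          <Worksheet ss:Name="Report">
            <Table>
              <Row>
                <Cell><Data ss:Type="String">NAME</Data></Cell>
                <Cell><Data ss:Type="String">COUNT</Data></Cell>
              </Row>
              <Row>
                <Cell><Data ss:Type="String">Test Data</Data></Cell>
                <Cell><Data ss:Type="Number">345</Data></Cell>
              </Row>
            </Table>
          </Worksheet>
        </Workbook>

    This is still plain text, so a sqlplus script can spool it directly; if the user later does File > Save As, Excel offers its native .xls format for their annotated copy.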

  • Can isdigit legitimately be locale dependent in C

    - by cdev
    In the section covering setlocale, the ANSI C standard states in a footnote that the only ctype.h functions whose behaviour is not affected by the current locale are isdigit and isxdigit. The Microsoft implementation of isdigit is locale dependent because, for example, in locales using code page 1250 isdigit only returns non-zero for characters in the range 0x30 ('0') - 0x39 ('9'), whereas in locales using code page 1252 isdigit also returns non-zero for the superscript digits 0xB2 ('²'), 0xB3 ('³') and 0xB9 ('¹'). Is Microsoft in violation of the C standard by making isdigit locale dependent? In this question I am primarily interested in C90, which Microsoft claims to conform to, rather than C99. Additional background: Microsoft's own documentation of setlocale incorrectly states that isdigit is unaffected by the LC_CTYPE part of the locale. The section of the C standard that covers the ctype.h functions contains some wording that I consider ambiguous: "The behavior of these functions is affected by the current locale. Those functions that have locale-specific aspects only when not in the "C" locale are noted below." I consider this ambiguous because it is unclear what it is trying to say about functions such as isdigit for which there are no notes about locale-specific aspects. It might be trying to say that such functions must be assumed to be locale dependent, in which case Microsoft's implementation of isdigit would be OK. (Except that the footnote I mentioned earlier seems to contradict this interpretation.)
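
    As a practical aside, when a digit test must be locale-independent no matter how the standard's wording is read, the range comparison is guaranteed: the standard requires the characters '0' through '9' to be contiguous and ascending in the execution character set. A minimal sketch:

        /* Locale-independent decimal digit test; relies only on the
           guaranteed contiguity of '0'..'9'. */
        static int is_decimal_digit(int c)
        {
            return c >= '0' && c <= '9';
        }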

  • Using Lucene to index private data, should I have a separate index for each user or a single index

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene (structured JSON objects would be indexed and stored in Lucene, and other content such as Word documents, etc. would be indexed in Lucene but stored in blob storage). I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these 3 objectives. (I am listing them here so that if anyone is kind enough to answer, they will have a better idea of what I am trying to do.)

    My questions revolve around performance and security:
    - Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient?
    - Can I improve indexing speed and the total throughput of the system by having a separate index for each user?

    My thinking is that having separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index. Any insight would be greatly appreciated.

    Regards,
    Nate
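
    On the security half of the question: with a single index, the usual pattern is to stamp every document with its owner and AND that term into every query server-side, so an ad-hoc query can never escape its user. A hedged sketch in the Lucene Java API of that era (field names are illustrative):

        // Index time: stamp each document with its owner.
        Document doc = new Document();
        doc.add(new Field("owner", userId, Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("body", noteText, Field.Store.YES, Field.Index.ANALYZED));

        // Query time: wrap whatever the user typed in a mandatory owner clause.
        BooleanQuery secured = new BooleanQuery();
        secured.add(userQuery, BooleanClause.Occur.MUST);
        secured.add(new TermQuery(new Term("owner", userId)), BooleanClause.Occur.MUST);

    The same owner term also covers the "all notes for user X" case with a plain TermQuery on its own.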

  • Save object in CoreData

    - by John
    I am using CoreData with iPhone SDK. I am making a notes app. I have a table with note objects displayed from my model. When a button is pressed I want to save the text in the textview to the object being edited. How do I do this? I've been trying several things but none seem to work. Thanks

    EDIT:

        NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
        NSEntityDescription *entity = [[fetchedResultsController fetchRequest] entity];
        NSManagedObject *newManagedObject = [NSEntityDescription insertNewObjectForEntityForName:[entity name]
                                                                          inManagedObjectContext:context];
        [newManagedObject setValue:detailViewController.textView.text forKey:@"noteText"];

        NSError *error;
        if (![context save:&error]) {
            /*
             Replace this implementation with code to handle the error appropriately.
             abort() causes the application to generate a crash log and terminate.
             You should not use this function in a shipping application, although it
             may be useful during development. If it is not possible to recover from
             the error, display an alert panel that instructs the user to quit the
             application by pressing the Home button.
             */
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }

    The above code saves it correctly but it saves it as a new object. I want it to be saved as the one I have selected in my tableView.
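
    Since insertNewObjectForEntityForName: always creates a fresh object, the fix is to update the managed object behind the selected row instead of inserting; a hedged sketch, assuming the table is driven by the same fetched results controller:

        // Update the note backing the selected row rather than inserting a new one.
        NSIndexPath *indexPath = [self.tableView indexPathForSelectedRow];
        NSManagedObject *selectedNote = [fetchedResultsController objectAtIndexPath:indexPath];
        [selectedNote setValue:detailViewController.textView.text forKey:@"noteText"];

        NSError *error = nil;
        if (![[fetchedResultsController managedObjectContext] save:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
        }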

  • Android Java writing text file to sd card

    - by Paul
    I have a strange problem I've come across. My app can write a simple text file to the SD card, and sometimes it works for some people but not for others, and I have no idea why. For some people it force closes if they put certain characters like "..." in it, and such. I cannot seem to reproduce it, as I've had no troubles, but this is the code that handles it. Can anyone think of something that may lead to problems, or a better way to do it?

        public void generateNoteOnSD(String sFileName, String sBody){
            try {
                File root = new File(Environment.getExternalStorageDirectory(), "Notes");
                if (!root.exists()) {
                    root.mkdirs();
                }
                File gpxfile = new File(root, sFileName);
                FileWriter writer = new FileWriter(gpxfile);
                writer.append(sBody);
                writer.flush();
                writer.close();
                Toast.makeText(this, "Saved", Toast.LENGTH_SHORT).show();
            } catch(IOException e) {
                e.printStackTrace();
                importError = e.getMessage();
                iError();
            }
        }
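
    Two hedged suspects, since the failures track particular users and file names: a name containing characters that are illegal on FAT filesystems ("..." among them) will make FileWriter throw, and devices without a mounted, writable card fail the same way. A sketch of guarding both before the write (the sanitising pattern is illustrative):

        // AndroidManifest.xml must also declare:
        // <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

        // Bail out unless the SD card is mounted read-write.
        if (!Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) {
            Toast.makeText(this, "SD card not available", Toast.LENGTH_SHORT).show();
            return;
        }
        // Replace characters that are illegal in FAT filenames.
        String safeName = sFileName.replaceAll("[\\\\/:*?\"<>|]", "_");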
