Search Results

Search found 12793 results on 512 pages for 'format specifiers'.

Page 420 of 512

  • Unable to change the system time zone setting on Windows Server 2008 R2.

    - by Ganesh
    Hi All, I have an MFC application that tries to change the system time zone setting on Windows Server 2008 R2. I am using the SetTimeZoneInformation() API, which fails with error code 1314, i.e. "A required privilege is not held by the client." Please refer to the sample code below:

        TIME_ZONE_INFORMATION l_TimeZoneInfo;
        DWORD l_dwRetVal = 0;
        ZeroMemory(&l_TimeZoneInfo, sizeof(TIME_ZONE_INFORMATION));
        l_TimeZoneInfo.Bias = -330;
        l_TimeZoneInfo.StandardBias = 0;
        l_TimeZoneInfo.StandardDate.wDay = 0;
        l_TimeZoneInfo.StandardDate.wDayOfWeek = 0;
        l_TimeZoneInfo.StandardDate.wHour = 0;
        l_TimeZoneInfo.StandardDate.wMilliseconds = 0;
        l_TimeZoneInfo.StandardDate.wMinute = 0;
        l_TimeZoneInfo.StandardDate.wMonth = 0;
        l_TimeZoneInfo.StandardDate.wSecond = 0;
        l_TimeZoneInfo.StandardDate.wYear = 0;
        CString l_csDaylightName = _T("India Daylight Time");
        CString l_csStdName = _T("India Standard Time");
        wcscpy(l_TimeZoneInfo.DaylightName, l_csDaylightName.GetBuffer(l_csDaylightName.GetLength()));
        wcscpy(l_TimeZoneInfo.StandardName, l_csStdName.GetBuffer(l_csStdName.GetLength()));
        ::SetLastError(0);
        if (0 == ::SetTimeZoneInformation(&l_TimeZoneInfo))
        {
            l_dwRetVal = ::GetLastError();
            CString l_csErr = _T("");
            l_csErr.Format(_T("%d"), l_dwRetVal);
        }

    The MFC application was developed with Visual Studio 2008 and is UAC aware, i.e. UAC is enabled in its manifest file with the execution level set to "HighestAvailable". I have administrator privileges, yet when I run the application it still fails to change the system time zone setting. Thanks in advance, Ganesh
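
    Error 1314 is ERROR_PRIVILEGE_NOT_HELD. SetTimeZoneInformation() needs the SE_TIME_ZONE_NAME privilege to be enabled in the process token, and running elevated only makes the privilege available, it does not enable it. A minimal sketch of enabling it before the call (error handling kept short; this is not part of the original post):

        #include <windows.h>

        // Enable SeTimeZonePrivilege in the current process token so that
        // a following SetTimeZoneInformation() call can succeed.
        BOOL EnableTimeZonePrivilege()
        {
            HANDLE hToken = NULL;
            if (!OpenProcessToken(GetCurrentProcess(),
                                  TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
                return FALSE;

            TOKEN_PRIVILEGES tp;
            tp.PrivilegeCount = 1;
            tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
            if (!LookupPrivilegeValue(NULL, SE_TIME_ZONE_NAME, &tp.Privileges[0].Luid))
            {
                CloseHandle(hToken);
                return FALSE;
            }

            BOOL ok = AdjustTokenPrivileges(hToken, FALSE, &tp, sizeof(tp), NULL, NULL)
                      && GetLastError() == ERROR_SUCCESS;
            CloseHandle(hToken);
            return ok;
        }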

    Read the article

  • Windows Messages Bizarreness

    - by jameszhao00
    Probably just a gross oversight of some sort, but I'm not receiving any WM_SIZE messages in the message loop. However, I do receive them in the WndProc. I thought the message loop handed messages out to WndProc?

        LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
                // this message is read when the window is closed
                case WM_DESTROY:
                {
                    // close the application entirely
                    PostQuitMessage(0);
                    return 0;
                }
                break;
                case WM_SIZE:
                    return 0;
                    break;
            }
            printf("wndproc - %i\n", message);
            // Handle any messages the switch statement didn't
            return DefWindowProc(hWnd, message, wParam, lParam);
        }

    ... and now the message loop...

        while (TRUE)
        {
            // Check to see if any messages are waiting in the queue
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                // translate keystroke messages into the right format
                TranslateMessage(&msg);
                // send the message to the WindowProc function
                DispatchMessage(&msg);
                // check to see if it's time to quit
                if (msg.message == WM_QUIT)
                {
                    break;
                }
                if (msg.message == WM_SIZING)
                {
                    printf("loop - resizing...\n");
                }
            }
            else
            {
                // do other stuff
            }
        }
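
    The likely explanation: WM_SIZE is a sent message, not a posted one. DefWindowProc generates it while handling WM_WINDOWPOSCHANGED and delivers it straight to the window procedure, so it never sits in the queue that PeekMessage drains; only posted messages (keyboard and mouse input, WM_QUIT, anything sent with PostMessage) show up there. Resize handling therefore belongs in WndProc, roughly like this (OnResize is a hypothetical handler, not from the original code):

        case WM_SIZE:
            // Delivered directly to the window procedure; the message loop never sees it.
            OnResize(LOWORD(lParam), HIWORD(lParam));   // new client width / height
            return 0;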

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building; that would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?
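
    On the path question, one option that avoids any TeamCity-specific knowledge in the tests is to resolve the site folder relative to the test assembly rather than hardcoding it; TeamCity also exposes the checkout directory as the teamcity.build.checkoutDir parameter, which can be handed to the test run as an environment variable if an explicit value is preferred. A minimal sketch of the relative approach; the "..\.." segment is an assumption about the repository layout (and about where MSTest deploys the assemblies) and would need adjusting:

        using System;
        using System.IO;

        public static class SitePaths
        {
            public static string FrontEndWeb()
            {
                // BaseDirectory is the folder the test assemblies run from,
                // both on a dev box and on the build agent.
                var baseDir = AppDomain.CurrentDomain.BaseDirectory;
                return Path.GetFullPath(Path.Combine(baseDir, @"..\..\FrontEnd\MyProject.FrontEnd.Web"));
            }
        }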

    Read the article

  • Margin on the top in LaTex

    - by Tim
    I am now writing a thesis which is required to have one and a half inches on the left (binding) side, and one inch on the other three sides. I just had my print-out checked at the binding office and I was told the margins are not okay; in particular the one on the top is a little less than one inch. See the figure below: I wonder which commands are responsible for the margins on the four sides, especially the one on the top? I provide links to two files that I believe control the layout and format of the thesis: jhu12.clo and thesis.cls. Or, instead of modifying the LaTeX files, is there some setting when printing the PDF file that controls these margins on the print-out? Thanks and regards! EDIT: This is the output of the \layout command. It shows "one inch + \voffset" for the top (item no. 2), but the staff at the binding office used a ruler to show it is less than one inch. How do I adjust the margins in LaTeX then? Thanks!
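
    The top margin in a standard class is built up from several lengths (\voffset, \topmargin, \headheight and \headsep), which is what makes it awkward to tune by hand, and printer drivers can add their own "fit to page" scaling, so it is worth printing at 100% scale before touching the source. If the class allows extra packages, a minimal sketch of stating the requirement directly in the preamble with the geometry package (the paper size is an assumption):

        \usepackage[letterpaper, left=1.5in, right=1in, top=1in, bottom=1in]{geometry}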

    Read the article

  • Unknown date shown for mails received from Mantis in Webmail

    - by Timw
    Hello, I'm using Mantis bug tracker v1.1.8 and the Horde webmail system for my company email. When emails sent by Mantis arrive in my company inbox, I get "Unknown Date" in the Date field of the inbox view. When I open the message, I see a date like Thu, 31 Dec 2009 14:32:15 +0580. The other mails, whose dates I can see in the inbox view, have a date in a format like Mon, 21 Dec 2009 06:56:18 +0100 [12/21/2009 11:26:18 AM IST]. For your reference I have pasted below the contents of my config_inc.php:

        <?php
        $g_hostname = 'localhost';
        $g_db_type = 'mysql';
        $g_database_name = 'bugtracker_mantis';
        $g_db_username = 'root';
        $g_db_password = '';

        # select the method to mail by:
        # 0 - mail()
        # 1 - sendmail
        # 2 - SMTP
        $g_phpMailer_method = 2;

        # This option allows you to use a remote SMTP host. Must use the phpMailer script
        # Name of smtp host, needed for phpMailer, taken from php.ini
        $g_smtp_host = <my_smtp_host>;

        $g_administrator_email = <my_administrator_email>;
        $g_webmaster_email = <my_webmaster_email>;
        $g_from_email = <my_from_email>;

        putenv("TZ=Asia/Calcutta");

        # Date Settings
        $g_default_language = 'english';
        $g_short_date_format = 'dm-Y';
        $g_normal_date_format = 'dmY H: i';
        $g_complete_date_format = 'm-d-y H:i T';
        ?>

    Any way to fix this problem? Thank you

    Read the article

  • Computing complex math equations in python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations? Take the following two examples:

        F = B * { [ a * b * sumOf (A / B ''' for all i ''' ) ] / [ sumOf(c * d * j) ] }

    where: F = cost from i to j; B, a, b, c, d, j are all vectors in the format [ [zone_i, zone_j, cost_of_i_to_j], [..] ]. This should produce a vector F [ [1, 2, F_1_2], ..., [i, j, F_i_j] ].

        T_ij = [ P_i * A_i * F_i_j ] / [ sumOf [ A_j * F_i_j ] // j = 1 to j = n ]

    where: n is the number of zones; T = vector [ [1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j] ]; F = vector [ [1, 2, F_1_2], ..., [i, j, F_i_j] ]. So P_i would be the sum of all P_i_j for all j, and A_j would be the sum of all P_j for all i. I'm not sure what I'm looking for, but perhaps a parser for these equations, or methods to deal with the repeated multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use:

        from collections import defaultdict

        A_j_dict = defaultdict(float)
        for A_item in TG:
            A_j_dict[A_item[1]] += A_item[3]

    Although this works fine, I really feel that it is a brute-force, hacky method that becomes unmaintainable as soon as we want to add more variables or parameters. Are there any math equation parsers you'd recommend? Side note: these equations are used to model travel. Currently I use Excel to solve a lot of these equations, and I find that process daunting. I'd rather move to Python, where the code pulls the data directly from our database (Postgres) and writes the results back to the database. All that is figured out; I'm just struggling with evaluating the equations themselves. Thanks :)
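
    For the aggregation parts, the heavy lifting can be pushed onto a dataframe library instead of hand-rolled loops. A minimal sketch of the same per-j summation the defaultdict loop performs, assuming pandas is an option; the TG rows here are made up and only stand in for "tuples whose second element is j and whose fourth element is the value to sum":

        import pandas as pd

        # hypothetical rows standing in for TG: (i, j, <other>, <value summed per j>)
        TG = [(1, 2, 0.5, 10.0), (1, 3, 0.2, 4.0), (2, 3, 0.9, 7.0)]

        df = pd.DataFrame(TG, columns=["i", "j", "x", "value"])
        A_j = df.groupby("j")["value"].sum()   # same result as the defaultdict loop
        print(A_j)

    The same groupby/sum and elementwise-multiply operations cover the numerators and denominators of the T_ij formula, which keeps the code close to the written equations without needing a full expression parser.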

    Read the article

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far, though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread()
        {
            print(sharedVar);
        }

        main()
        {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above).

    Question: Strictly speaking, preferably without assuming any particular CPU, is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)

        Shared Memory Consistency Models: A Tutorial
        hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf

        Memory Barriers: a Hardware View for Software Hackers
        rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf

        Linux Kernel Memory Barriers
        kernel.org/doc/Documentation/memory-barriers.txt

        Computer Architecture: A Quantitative Approach
        amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk
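
    For what it's worth, the "spawn a thread" step is itself the safety net here: mainstream threading APIs define thread creation as a synchronization point, so writes the parent made before the create call are visible to the new thread without extra barriers (POSIX lists pthread_create among its memory-synchronizing functions, and in C++11 the completion of the std::thread constructor synchronizes-with the start of the new thread). A sketch of the same pattern in C++11 terms, where printing 1 is guaranteed:

        #include <cstdio>
        #include <thread>

        int sharedVar = 0;

        int main()
        {
            sharedVar = 1;   // happens-before the lambda below, via the thread constructor
            std::thread t([] { std::printf("%d\n", sharedVar); });   // prints 1
            t.join();
        }

    The caveat is that this only covers data written before the spawn; anything written afterwards still needs mutexes, atomics, or other synchronization.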

    Read the article

  • ASP.NET tree view in SharePoint webpart - Input string error

    - by Faiz
    Hi All, I am facing a very strange issue. I have a SharePoint webpart that displays an ASP.NET tree view. It takes the tree depth from a drop-down. To improve performance of the tree view, I am setting the PopulateOnDemand property to true for the last level of the tree depth. For example, if I have a total of 10 levels in the data and the user selects a tree depth of 3, then at the third level I set PopulateOnDemand to true. Now comes the strange part. When I click on the + image on the third level, and there are children under that particular node, the callback happens and the node gets expanded. But if there are no children for that particular node, clicking + throws an "Input string was not in the correct format" error. I have made sure that there is no server-side error; something looks fishy when Internet Explorer is trying to construct the expanded node. Please let me know if anyone has faced a similar issue, or knows the resolution. Thanks in advance
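
    One thing worth checking: with on-demand population, a node that is marked PopulateOnDemand but turns out to have no children still triggers the client callback when its + is clicked. A common way to avoid that path is to mark only nodes that actually have children as populate-on-demand while filling in the last pre-loaded level. A rough sketch of a populate handler doing that (GetChildren is a hypothetical data call, not taken from the original code):

        protected void Tree_TreeNodePopulate(object sender, TreeNodeEventArgs e)
        {
            foreach (var row in GetChildren(e.Node.Value))
            {
                var child = new TreeNode(row.Text, row.Value)
                {
                    // Leaf nodes get no + image and never fire an expand callback.
                    PopulateOnDemand = row.HasChildren
                };
                e.Node.ChildNodes.Add(child);
            }
        }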

    Read the article

  • Generating JavaScript in C# and subsequent testing

    - by Codebrain
    We are currently developing an ASP.NET MVC application which makes heavy use of attribute-based metadata to drive the generation of JavaScript. Below is a sample of the type of method we are writing:

        string GetJavascript<T>(string javascriptPresentationFunctionName, string inputId, T model)
        {
            return @"function updateFormInputs(value){
                $('#" + inputId + @"_SelectedItemState').val(value);
                $('#" + inputId + @"_Presentation').val(value);
            }
            function clearInputs(){
                " + helper.ClearHiddenInputs<T>(model) + @"
                updateFormInputs('');
            }
            function handleJson(json){
                clearInputs();
                " + helper.UpdateHiddenInputsWithJson<T>("json", model) + @"
                updateFormInputs(" + javascriptPresentationFunctionName + @"());
                " + model.GetCallBackFunctionForJavascript("json") + @"
            }";
        }

    This method generates some boilerplate and hands off to various other methods which return strings. The whole lot is then returned as a string and written to the output. The questions I have are: 1) Is there a nicer way to do this other than using large string blocks? We've considered using a StringBuilder or the response stream, but it seems quite 'noisy'. Using string.Format starts to become difficult to comprehend. 2) How would you go about unit testing this code? It seems a little amateurish just doing a string comparison looking for particular output in the string. 3) What about actually testing the eventual JavaScript output? Thanks for your input!

    Read the article

  • Events are not displayed in my fullcalendar

    - by ChangJiu
    Hi BalusC! I have used your method from above in my servlet, CalendarMap:

        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            Map<String, Object> map = new HashMap<String, Object>();
            map.put("id", 115);
            map.put("title", "changjiu");
            map.put("start", new SimpleDateFormat("yyyy-MM-15").format(new Date()));
            map.put("url", "http://yahoo.com/");

            // Convert to JSON string.
            String json = new Gson().toJson(map);

            // Write JSON string.
            response.setContentType("application/json");
            response.setCharacterEncoding("UTF-8");
            response.getWriter().write(json);
        }

    I want to display it in my fullcalendar as follows:

        $(document).ready(function() {
            $('#calendar').fullCalendar({
                eventSources: [ "CalendarMap" ]
            });
        });

    But it doesn't work! Can you help me? Thank you!
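
    One detail that matters here: a FullCalendar event feed is expected to return a JSON array of event objects, while the servlet above writes a single object ({"id":115,...}), which the calendar will not render. A minimal sketch of the change, keeping the same Gson call (only the wrapping list is new; java.util.List/ArrayList imports assumed):

        // Wrap the single event in a list so the feed serializes to a JSON array.
        List<Map<String, Object>> events = new ArrayList<Map<String, Object>>();
        events.add(map);
        String json = new Gson().toJson(events);   // [{"id":115,"title":"changjiu",...}]

    The "start" value is also worth a second look: "yyyy-MM-15" hardcodes the day of the month, which may or may not be intended.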

    Read the article

  • ClassCircularityError when running tomcat6 from eclipse

    - by zenmonkey
    I'm using Eclipse 3.5, the Tomcat runtime is set to Tomcat 6.0.26, and the VM is JDK 1.6.17 (Mac OS X). When I try to run a web application from the Eclipse Java EE perspective, I keep seeing this error in the console:

        Caused by: java.lang.ClassCircularityError: java/util/logging/LogRecord
            at com.adsafe.util.SimpleFormatter.format(SimpleFormatter.java:11)
            at java.util.logging.StreamHandler.publish(StreamHandler.java:179)
            at java.util.logging.ConsoleHandler.publish(ConsoleHandler.java:88)
            at java.util.logging.Logger.log(Logger.java:458)
            at java.util.logging.Logger.doLog(Logger.java:480)
            at java.util.logging.Logger.logp(Logger.java:596)
            at org.apache.juli.logging.DirectJDKLog.log(DirectJDKLog.java:165)
            at org.apache.juli.logging.DirectJDKLog.info(DirectJDKLog.java:115)
            at org.apache.catalina.core.ApplicationContext.log(ApplicationContext.java:644)
            at org.apache.catalina.core.ApplicationContextFacade.log(ApplicationContextFacade.java:251)
            at org.apache.catalina.core.StandardWrapper.unavailable(StandardWrapper.java:1327)
            at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1130)
            at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:993)
            at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4187)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4496)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
            at org.apache.catalina.core.StandardService.start(StandardService.java:519)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
            ... 6 more

    java/util/logging/LogRecord implements Serializable, so I am not sure where the circular reference could have crept in. Has anyone seen this before, and does anyone know how to fix it?

    Read the article

  • Help with error creating SharePoint list (probably due to privilege issues)

    - by Swami
    I'm getting an error when trying to activate a webpart. It activates fine in one setup , but fails in a different one. Administrator in both. Seems like it fails because it's not able to create the list. The error is: Message: Value cannot be null. Stack Trace: at Microsoft.Sharepoint.SPRoleAssignment..ctor at ClientRequestHandler.CreateList(... private static void CreateLists() { try { SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite site = SPContext.Current.Site) { using (SPWeb web = site.RootWeb) { string listName = LIST_NAME; bool listExist = ContainList(web, listName); if (!listExist) { AddFieldDelegate _delegate = new AddFieldDelegate(AddAttachmentFields); SPList list = CreateList(web, listName, _delegate); RegisterList(web, list, KEY); } } } }); } catch (Exception ex) { throw new Exception(String.Format("Message: {0} Stack Trace: {1}", ex.Message, ex.StackTrace.ToString())); } } private static SPList CreateList(SPWeb web, string listName, AddFieldDelegate _delegate) { web.AllowUnsafeUpdates = true; SPListTemplateType genericList = new SPListTemplateType(); genericList = SPListTemplateType.GenericList; Guid listGuid = web.Lists.Add(listName, "List", genericList); SPList list = web.Lists[listGuid]; list.Hidden = true; SPView view = _delegate(list); view.Update(); //Remove permissions from the list list.BreakRoleInheritance(false); //Make site owners the list administrators SPPrincipal principal = web.AssociatedOwnerGroup as SPPrincipal; SPRoleAssignment assignment = new SPRoleAssignment(principal); assignment.RoleDefinitionBindings.Add(web.RoleDefinitions.GetByType(SPRoleType.Administrator)); list.RoleAssignments.Add(assignment); //update list changes list.Update(); return list; }
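
    The stack trace points at the SPRoleAssignment constructor, and the only value handed to it here is web.AssociatedOwnerGroup, so the likely difference between the two farms is that the root web of the failing site collection has no associated owner group. A defensive sketch; the fallback to the site collection owner is an assumption about what fits this list, not part of the original code:

        SPPrincipal principal = web.AssociatedOwnerGroup as SPPrincipal;
        if (principal == null)
        {
            // No owners group is associated with the root web on this farm; fall back
            // to the site collection owner so the constructor receives a non-null principal.
            principal = web.Site.Owner;
        }
        SPRoleAssignment assignment = new SPRoleAssignment(principal);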

    Read the article

  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our data access layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Solutions such as storing settings in a fixed directory on the server in, say, our own XML format have occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on. And I would rather keep the solution self-contained if possible. EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config... That puts a few (very good) answers out of context, my bad.
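
    For the connection-string part specifically, the built-in configuration system already supports splitting a section into its own file and encrypting it, which covers "editable without Visual Studio" without any home-grown encryption. A minimal sketch, with the file and connection names purely illustrative; the external file can then be protected on the server with aspnet_regiis -pef:

        <!-- web.config (or the consuming app's app.config) -->
        <connectionStrings configSource="connectionStrings.config" />

        <!-- connectionStrings.config, deployed per environment -->
        <connectionStrings>
          <add name="Dal"
               connectionString="Data Source=...;Initial Catalog=...;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>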

    Read the article

  • Can't get data with spaces into the database from Ajax POST request

    - by J Jones
    I have a real simple form with a textbox and a button, and my goal is to have an asynchronous request (jQuery: $.ajax) send the text to the server (PHP/MySQL a la Joomla) so that it can be added to a database table. Here's the JavaScript that is sending the data from the client:

        var value = $('#myvalue').val();
        $.ajax({
            type: "POST",
            url: "/administrator/index.php",
            data: { option: "com_mycomponent", task: "enterValue", thevalue: value, format: "raw" },
            dataType: "text",
            success: reportSavedValue
        });

    The problem arises when the user enters text with a space in it. The $_POST variable that I receive has all the spaces stripped out, so that if the user enters "This string has spaces", the server gets the value "Thisstringhasspaces". I have been googling around and have found lots of references saying that I need to use encodeURIComponent. So I have tried it, but now the value that I get from $_POST is "This20string20has20spaces". So it appears to be encoding it the way I would expect, only to have the percent signs stripped instead of the spaces, leaving the hex digits behind. I'm really confused. It seems that this sort of question is asked and answered everywhere on the web, and everywhere encodeURIComponent is hailed as the silver bullet. But apparently I'm fighting a different breed of lycanthrope. Does anyone have any suggestions?
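
    The symptoms (spaces gone, then percent signs gone after manual encoding) point at something on the server side filtering the value after jQuery has already URL-encoded it correctly; double-encoding with encodeURIComponent only feeds that filter different characters to strip. A quick way to confirm where the damage happens is to dump the raw request body before any framework filtering runs, assuming a temporary line can be dropped into the entry point:

        <?php
        // Temporary diagnostic: the raw, still URL-encoded POST body as PHP received it.
        // If the spaces (encoded as + or %20) are intact here, the stripping happens later,
        // for example in the framework's input filtering, not in the browser or in jQuery.
        error_log(file_get_contents('php://input'));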

    Read the article

  • Creating dynamic jQuery tooltips for dynamic content

    - by Mel
    I'm using the qTip jQuery plugin to create tooltips for a set of links. Two problems. How do I create a set of tooltips for three dynamically generated links, where the content of the tooltip will also be dynamic?

        <a href="books.cfm?bookID=11">Book One</a>
        <a href="books.cfm?bookID=22">Book Two</a>
        <a href="books.cfm?bookID=33">Book Three</a>

    I would like to create a tooltip for each link. Each tooltip will then load details about each book, so I must pass the bookID to the tooltip:

        $('#catalog a[href]').each(function() {
            $(this).qtip({
                content: {
                    URL: 'cfcs/viewbooks.cfc?method=bookDetails',
                    data: { bookID: <cfoutput>#indexView.bookID#</cfoutput> },
                    method: 'get'
                }
            });
        });

    Unfortunately the above code is not working correctly. I've gotten it to work when I've used a static bookID instead of a dynamically generated number. Even when it does work (by using a static number for bookID), I can't format the data correctly; it comes back as a query result, or a bunch of text strings. Should I send back the results as HTML? Unsure. PS: I am an absolute novice at JavaScript and jQuery, so please try not to be too technical. Many thanks!
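
    One likely issue with the snippet above: the <cfoutput> value is rendered once on the server, so every tooltip gets the same bookID. Since each link already carries its ID in its href, the ID can be read per link inside the .each() loop on the client instead. A sketch that keeps the same qTip options as the original and only changes where bookID comes from:

        $('#catalog a[href]').each(function() {
            // Pull the ID out of this link's own href, e.g. "books.cfm?bookID=11" -> "11".
            var bookID = (this.href.match(/bookID=(\d+)/) || [])[1];
            $(this).qtip({
                content: {
                    URL: 'cfcs/viewbooks.cfc?method=bookDetails',
                    data: { bookID: bookID },
                    method: 'get'
                }
            });
        });

    As for formatting, returning a small ready-to-display HTML fragment from the CFC is usually the simplest thing for a tooltip to consume, since the tooltip just inserts whatever the response contains.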

    Read the article

  • singleton factory connection pdo

    - by Scarface
    Hey guys, I am having a lot of trouble trying to understand this, and I was just wondering if someone could help me with some questions. I found some code that is supposed to create a connection with PDO. The problem I was having was having my connection defined within functions. Someone suggested globals, but then pointed to a 'better' solution: http://stackoverflow.com/questions/130878/global-or-singleton-for-database-connection. My questions about this code are (PS: I cannot format this code on this page, so see the link if you cannot read it):

    1. What is the point of the connection factory?
    2. What goes inside new ConnectionFactory(...)?
    3. When the connection is defined ($db = new PDO(...)), why is there no try or catch (I use those for error handling)? Does this then mean I have to use try and catch for every subsequent query?

        class ConnectionFactory
        {
            private static $factory;

            public static function getFactory()
            {
                if (!self::$factory)
                    self::$factory = new ConnectionFactory(...);
                return self::$factory;
            }

            private $db;

            public function getConnection()
            {
                if (!$db)
                    $db = new PDO(...);
                return $db;
            }
        }

        function getSomething()
        {
            $conn = ConnectionFactory::getFactory()->getConnection();
            . . .
        }
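
    Regarding question 2, the "..." in the original is just the PDO constructor arguments (DSN, username, password). A filled-in sketch under the assumption of a local MySQL database (the DSN and credentials are placeholders), with the construction wrapped in try/catch once so that individual queries don't each need one just to connect:

        <?php
        class ConnectionFactory
        {
            private static $factory;
            private $db;

            public static function getFactory()
            {
                if (!self::$factory)
                    self::$factory = new ConnectionFactory();
                return self::$factory;
            }

            public function getConnection()
            {
                if (!$this->db) {
                    try {
                        $this->db = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
                        // Make queries throw exceptions too, so one catch strategy covers everything.
                        $this->db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                    } catch (PDOException $e) {
                        die('Database connection failed: ' . $e->getMessage());
                    }
                }
                return $this->db;
            }
        }

    Whether later queries need their own try/catch then depends on the error mode: with ERRMODE_EXCEPTION they throw PDOException, which can be caught wherever it makes sense rather than around every single call.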

    Read the article

  • How to put large text data (~20 MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using following query to insert some large text data : internal static string InsertStorageItem = "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)"; and the code I am using to execute this query is as follows : string content = "very very large data"; string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM"); var command = new SqlCeCommand(query, _sqlConnection); var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText); paramData.Value = content; paramData.SourceColumn = "StorageData"; command.ExecuteNonQuery(); But at the last line I am getting this following error : System.Data.SqlServerCe.SqlCeException was unhandled by user code Message=The data was truncated while converting from one data type to another. [ Name of function(if known) = ] Source=SQL Server Compact ADO.NET Data Provider HResult=-2147467259 NativeError=25920 StackTrace: at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options) at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery() at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders() at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) InnerException: Now my question is how am I supposed to insert such large data to sqlce db? Regards, Anindya Chatterjee http://abstractclass.org
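
    One detail that commonly bites here: when a SqlCeParameter of type NText is created without an explicit size, the provider can treat the value as an ordinary, length-limited string and report this "data was truncated" conversion error for large payloads. A hedged sketch of one commonly suggested workaround, giving the parameter its size up front (everything else unchanged from the code above):

        var command = new SqlCeCommand(query, _sqlConnection);
        var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText);
        paramData.Size = content.Length;          // explicit size so the ntext value is not truncated
        paramData.Value = content;
        paramData.SourceColumn = "StorageData";
        command.ExecuteNonQuery();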

    Read the article

  • Retrieving dll version info via Win32 - VerQueryValue(...) crashes under Win7 x64

    - by user256890
    The respected open source .NET wrapper implementation (SharpBITS) of Windows BITS services fails identifying the underlying BITS version under Win7 x64. Here is the source code that fails. NativeMethods are native Win32 calls wrapped by .NET methods and decorated via DllImport attribute. private static BitsVersion GetBitsVersion() { try { string fileName = Path.Combine( System.Environment.SystemDirectory, "qmgr.dll"); int handle = 0; int size = NativeMethods.GetFileVersionInfoSize(fileName, out handle); if (size == 0) return BitsVersion.Bits0_0; byte[] buffer = new byte[size]; if (!NativeMethods.GetFileVersionInfo(fileName, handle, size, buffer)) { return BitsVersion.Bits0_0; } IntPtr subBlock = IntPtr.Zero; uint len = 0; if (!NativeMethods.VerQueryValue(buffer, @"\VarFileInfo\Translation", out subBlock, out len)) { return BitsVersion.Bits0_0; } int block1 = Marshal.ReadInt16(subBlock); int block2 = Marshal.ReadInt16((IntPtr)((int)subBlock + 2 )); string spv = string.Format( @"\StringFileInfo\{0:X4}{1:X4}\ProductVersion", block1, block2); string versionInfo; if (!NativeMethods.VerQueryValue(buffer, spv, out versionInfo, out len)) { return BitsVersion.Bits0_0; } ... The implementation follows the MSDN instructions by the letter. Still during the second VerQueryValue(...) call, the application crashes and kills the debug session without hesitation. Just a little more debug info right before the crash: spv = "\StringFileInfo\040904B0\ProductVersion" buffer = byte[1900] - full with binary data block1 = 1033 block2 = 1200 I looked at the targeted "C:\Windows\System32\qmgr.dll" file (The implementation of BITS) via Windows. It says that the Product Version is 7.5.7600.16385. Instead of crashing, this value should return in the verionInfo string. Any advice?
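
    One thing stands out in the snippet for a 64-bit process: the pointer arithmetic (IntPtr)((int)subBlock + 2) truncates a 64-bit pointer to 32 bits, which is exactly the kind of bug that only shows up on x64 and can take the process down inside the next native call. A sketch of the arithmetic done in a pointer-size-safe way, plus the managed shortcut that avoids the native calls altogether:

        // 64-bit-safe offset into the translation block:
        int block1 = Marshal.ReadInt16(subBlock);
        int block2 = Marshal.ReadInt16(new IntPtr(subBlock.ToInt64() + 2));

        // Alternative: let the BCL do the whole job.
        var info = System.Diagnostics.FileVersionInfo.GetVersionInfo(fileName);
        string productVersion = info.ProductVersion;   // e.g. "7.5.7600.16385"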

    Read the article

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files of tests for my code (which use a "unittest" class). Later I found it would be nice to also test database integrity: things like whether keys have the correct format and whether parent and child nodes point to each other correctly. I put this into a separate directory tree. I use the same unittest class for the integrity tests. Now I wonder if it really makes sense to keep these separate. To test the integrity of the data I often duplicate parts of the code that I use to test the code that handles the data, but it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. I want to call the integrity tests from cron and send an alarm if something is wrong in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron job in the production environment.

    Read the article

  • String Manipulation: Splitting Delimited Data

    - by Milli Szabo
    I need to split some info from asterisk-delimited data.

    Data format: NAME*ADDRESS LINE1*ADDRESS LINE2

    Rules:
    1. Name should always be present.
    2. Address line 1 and 2 might not be.
    3. There should always be three asterisks.

    Samples:

        MR JONES A ORTEGA*ADDRESS 1*ADDRESS2*
        Name: MR JONES A ORTEGA
        Address Line1: ADDRESS 1
        Address Line2: ADDRESS 2

        A PAUL*ADDR1**
        Name: A PAUL
        Address Line1: ADDR1
        Address Line2: Not Given

    My algorithm is:
    1. Iterate through the characters in the line.
    2. Store all chars in a temp variable until the first * is found. Reject the data if no char is found before the first occurrence of an asterisk. If some chars are found, use them as the name.
    3. Same as step 2 for finding address lines 1 and 2, except that this won't reject the data if no char is found.

    My algorithm looks ugly, and the code looks uglier. Splitting using //* doesn't work either, since the name can be replaced by address line 1: if the data is *Address 1*Address2, split will create two indexes in the array, where index 0 will have the value Address 1 and index 1 will have the value Address2. Where's the name? Was there a name? Any suggestions?
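
    For completeness, most languages can express the whole algorithm as one split plus a couple of checks, as long as trailing empty fields are kept. A sketch in Java (the question doesn't name a language, so this is only illustrative); with the trailing asterisk, a valid record always splits into exactly four fields, the last of which is empty:

        static String[] parseRecord(String line) {
            String[] parts = line.split("\\*", -1);      // -1: keep trailing empty fields
            if (parts.length != 4 || parts[0].isEmpty()) {
                // wrong asterisk count, or the mandatory name field is missing
                throw new IllegalArgumentException("Bad record: " + line);
            }
            return new String[] { parts[0], parts[1], parts[2] };   // name, address line 1, address line 2
        }

    "MR JONES A ORTEGA*ADDRESS 1*ADDRESS2*" parses to the three expected values, "A PAUL*ADDR1**" leaves address line 2 empty, and "*ADDRESS 1*ADDRESS2*" is rejected because the name field is empty.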

    Read the article

  • % style macros not supported in some C++/CLI project property pages under VS2010?

    - by Dave Foster
    We're currently evaluating VS2010 and have upgraded our VS2008 C++/CLI project to the new .vcxproj format. I've noticed that a certain property we had set in the project settings did not get translated properly. Under Configuration Properties - Managed Resources - Resource Logical Name, we used to have (in VS2008) the setting: $(IntDir)\$(RootNamespace).$(InputName).resources which indicated that all .resx files were to compile into OurLib.SomeForm.resources inside of the assembly. (the Debug portion is dropped when assembled) According to MSDN, the $(InputName) macro no longer exists and should be replaced with %(Filename). However, when translating the above line to swap those macros, it does not seem to ever expand. The second .resx file it tries to compile, I get a "LINK : fatal error LNK1316: duplicate managed resource name 'Debug\OurLib.%(Filename).resources". This indicates to me that the % style macros are not being expanded here, at least in this specific property. If we don't set anything in that property, the default behavior seems to be to add the subdirectory as a prefix, such as: OurLib.Forms.SomeForm.resources where Forms is the subdir of our project that the .resx file lives. This only occurs when the .resx file is in an immediate subdirectory of the project being built. If a .resx file exists somewhere else on disk (aka ..\OtherLib\Forms\SomeForm2.resx) this prefix is NOT added. This is causing an issue with loading form resources, as it does not account for this possible prefix, even though we are using the standard Forms Designer method of getting at resources: System::ComponentModel::ComponentResourceManager^ resources = (gcnew System::ComponentModel::ComponentResourceManager(SomeForm::typeid)); and do not specify the .resources file by name. The issue I've just described may not be the same as the original question, but if I were to fix the Resource Logical Name issue I think this would all go away. Does anyone have any information about these % macros and where they are allowed to be used?

    Read the article

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of about 1'000'000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid nineties in there!). A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle Syntax for joins, meaning we have code that looks like this... Example of Inner Join select customers.*, orders.date, orders.value from customers, orders where customers.custid = orders.custid Example of Outer Join select customers.custid, contacts.ContactName, contacts.ContactTelNo from customers, contacts where customers.custid = contacts.custid(+) As new developers have joined the team, we have noticed that some of them seem to prefer using SQL-92 queries, like this: Example of Inner Join select customers.*, orders.date, orders.value from customers inner join orders on (customers.custid = orders.custid) Example of Outer Join select customers.custid, contacts.ContactName, contacts.ContactTelNo from customers left join contacts on (customers.custid = contacts.custid) Group A say that everyone should be using the the "old" syntax - we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we had. They also point out that "this is the way we've always done it, and we're comfortable with it..." Group B however say that they agree that we don't have the time to go back and change existing queries, we really ought to be adopting the "new" syntax on code that we write from here on in. They say that developers only really look at a single query at a time, and that so long as developers know both syntax there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future. Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin. Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...

    Read the article

  • Problem with deploying an ASP.NET MVC project on IIS 7.0: BadImageFormatException

    - by Markus
    Hello world, I am stuck with my web application. As the title says, it's an ASP.NET MVC (1.0) application, so I did the only two things needed to deploy an application like this: I made a build and copied it to the IIS folder. In the IDE (VS2008) everything works fine :(. This worked for a long time, but now I get an error for an included DLL from another project. (I have a German version, so the error is translated by Google, sorry for that.)

        [BadImageFormatException: File or assembly 'DataService.WebInterface.BusinessLogic' or one of its dependencies was not found. An attempt was made to load a file with an incorrect format.]
            System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0
            System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +416
            System.Reflection.Assembly.InternalLoad(String assemblyName, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +166
            System.Reflection.Assembly.Load(String assemblyName) +35
            System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +190

    What does that mean? Is the file corrupted, or do I have to change the web.config? Thank you for your support!
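
    "An attempt was made to load a file with an incorrect format" from a web app that runs fine in Visual Studio is very often a bitness mismatch: a 32-bit (x86) DLL being loaded into a 64-bit IIS 7 worker process, or the other way around. Two options, depending on which side should give way: rebuild the offending project as AnyCPU, or allow the application pool to run 32-bit code. The pool name below is a placeholder:

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /enable32BitAppOnWin64:true

    The same switch is available in IIS Manager under the application pool's Advanced Settings ("Enable 32-Bit Applications").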

    Read the article

  • using the ASP.NET Caching API via method annotations in C#

    - by craigmoliver
    In C#, is it possible to decorate a method with an annotation to populate the cache object with the return value of the method? Currently I'm using the following class to cache data objects: public class SiteCache { // 7 days + 6 hours (offset to avoid repeats peak time) private const int KeepForHours = 174; public static void Set(string cacheKey, Object o) { if (o != null) HttpContext.Current.Cache.Insert(cacheKey, o, null, DateTime.Now.AddHours(KeepForHours), TimeSpan.Zero); } public static object Get(string cacheKey) { return HttpContext.Current.Cache[cacheKey]; } public static void Clear(string sKey) { HttpContext.Current.Cache.Remove(sKey); } public static void Clear() { foreach (DictionaryEntry item in HttpContext.Current.Cache) { Clear(item.Key.ToString()); } } } In methods I want to cache I do this: [DataObjectMethod(DataObjectMethodType.Select)] public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name) { var ck = string.Format("SiteSettings_SelectOne_Name-Name_{0}-", Name.ToLower()); var dt = (DataTable)SiteCache.Get(ck); if (dt == null) { dt = new DataTable(); dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name)); SiteCache.Set(ck, dt); } var info = new SiteSettingsInfo(); foreach (DataRowView dr in dt.DefaultView) info = SiteSettingsInfo_Load(dr); return info; } Is it possible to separate those concerns like so: (notice the new annotation) [CacheReturnValue] [DataObjectMethod(DataObjectMethodType.Select)] public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name) { var dt = new DataTable(); dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name)); var info = new SiteSettingsInfo(); foreach (DataRowView dr in dt.DefaultView) info = SiteSettingsInfo_Load(dr); return info; }
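
    As written, a custom attribute like [CacheReturnValue] won't do anything by itself: attributes are only metadata, so something has to intercept the call and act on them, which is where AOP tools (PostSharp, a Castle DynamicProxy interceptor, or similar) come in. If adding such a dependency is unwanted, a lighter way to separate the concerns is a small generic helper so the data method no longer carries the caching plumbing. A sketch built on the SiteCache class above (GetOrAdd is a new name, not part of the existing code):

        public static class SiteCacheExtensions
        {
            // Returns the cached value for the key, or runs the factory once and caches its result.
            public static T GetOrAdd<T>(string cacheKey, Func<T> produce) where T : class
            {
                var cached = SiteCache.Get(cacheKey) as T;
                if (cached == null)
                {
                    cached = produce();
                    SiteCache.Set(cacheKey, cached);
                }
                return cached;
            }
        }

    Usage inside the data method would then look roughly like:

        var dt = SiteCacheExtensions.GetOrAdd(ck, () =>
        {
            var table = new DataTable();
            table.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
            return table;
        });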

    Read the article

  • Perf4J Graph Output from Log File

    - by manyxcxi
    I currently have a long-running process that I am trying to analyze with Perf4J. I currently have it writing results in CSV format to its own log file, using the AsyncCoalescingStatisticsAppender and a StatisticsCsvLayout on the file appender. My question is: when I try to use the --graph option from the command line (using the perf4j jar), it isn't populating the data points; it isn't populating anything. Are my appenders set up incorrectly? The log file contains hundreds (sometimes thousands) of data points for about 10 different tag names.

        <appender name="perfAppender" class="org.apache.log4j.FileAppender">
            <param name="File" value="perfStats.log"/>
            <layout class="org.perf4j.log4j.StatisticsCsvLayout">
            </layout>
        </appender>

        <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender">
            <!-- The TimeSlice option is used to determine the time window for which all received
                 StopWatch logs are aggregated to create a single GroupedTimingStatistics log.
                 Here we set it to 10 seconds, overriding the default of 30000 ms -->
            <param name="TimeSlice" value="10000"/>
            <appender-ref ref="ConsoleAppender"/>
            <appender-ref ref="CompositeRollingFileAppender"/>
            <appender-ref ref="perfAppender"/>
        </appender>

    Read the article
