Search Results

Search found 2509 results on 101 pages for 'converting'.


  • Use any CSS compiler (Sass, Less) to generate the selector

    - by xckpd7
    So I've recently been playing around with CSS compilers, but I have no idea how (or whether it's possible) to dynamically generate pieces of a selector. For instance, say I wanted to make mixins to get display: inline-block; to work cross-browser. I would have to write the styles, but I would also have to add the IE6/7 selector hacks to make them work in those browsers. Ideally I'm looking for a one-off thing to add to an element that just works. Someone kindly gave me this solution: http://stackoverflow.com/questions/2746754/css-compilers-and-converting-ie-hacks-to-conditional-css/2747036#2747036, and it would be nice to implement it in a minimal way that lets me specify it for a given element and be on my way. (For instance, in Less you can create a class with styles, pass that class to another element, and that element will inherit all of those styles.) It would be nice to pass an element .inline-block; and have it create the styles needed to support IE6/7 without resorting to hacks like _color: pink;. Any ideas? EDIT: likewise, how could I do something like clearfix in LESS (lesscss.org)? If only Sass can do it, that will work too.

    Read the article

  • What is your best-practice advice on implementing SQL stored procedures (in a C# winforms application)?

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: When should you use stored procedures? and Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS’s? I am a beginner on integrating .NET/SQL though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regards to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008 and SQL Server 2008; though this question can be regarded as language- and database- agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database. Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers: Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Am I better off creating all of the stored procedures in my application, inside of a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated. I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so? Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?
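
    (For what it's worth, the sketch below shows the two halves of the "create them from the application" idea in plain ADO.NET - creating a procedure during database initialization and then calling it instead of an inline query. The procedure name, columns and parameter are made up for illustration; it is a sketch of the pattern, not a recommendation on the Management-Studio-vs-code question itself.)

        using System.Data;
        using System.Data.SqlClient;

        public static class OrderQueries
        {
            // Run once during database initialization, after the tables exist.
            public static void CreateGetOrdersProcedure(string connectionString)
            {
                const string ddl =
                    "CREATE PROCEDURE dbo.usp_GetOrdersForCustomer @CustomerId INT AS " +
                    "SELECT id, order_date FROM orders WHERE customer_id = @CustomerId;";
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(ddl, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            // Call the stored procedure instead of an inline SELECT.
            public static DataTable GetOrdersForCustomer(string connectionString, int customerId)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.usp_GetOrdersForCustomer", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
                    var orders = new DataTable();
                    new SqlDataAdapter(cmd).Fill(orders);
                    return orders;
                }
            }
        }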

    Read the article

  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this PixelFormat is indexed, I'm aware that it uses a color table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13 ms vs 3 ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors, even though the PixelFormat remains 8bpp. I'm currently developing in C# and the algorithm is as follows:
    1. (C#) Load the original bitmap at 8bpp.
    2. (C#) Create an empty temp bitmap at 8bpp with the same size as the original.
    3. (C#) LockBits both bitmaps and, using P/Invoke, call a C++ method, passing the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the bitmap's pixels.)
    4. (C++) Create an int[256] palette according to some parameters and edit the temp bitmap's bytes by passing the original's pixel values through the palette.
    5. (C#) UnlockBits.
    My question is: how can I edit the pixel values without getting the strange RGB colors, or better still, edit the 8bpp bitmap's color table? Regards, Pedro
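
    (On the "or even better, edit the 8bpp bitmap's Color Table" part: System.Drawing exposes the indexed palette through Bitmap.Palette, which hands back a copy that has to be assigned back after editing. A minimal sketch, with the grayscale mapping as an arbitrary illustrative choice:)

        using System.Drawing;
        using System.Drawing.Imaging;

        static void RemapPalette(Bitmap eightBpp)
        {
            // Bitmap.Palette returns a copy, so modify it and assign it back.
            ColorPalette palette = eightBpp.Palette;
            for (int i = 0; i < palette.Entries.Length; i++)
            {
                palette.Entries[i] = Color.FromArgb(i, i, i); // e.g. force a grayscale ramp
            }
            eightBpp.Palette = palette;
        }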

    Read the article

  • Which type of file is easiest and most efficient to parse? (HTML, PDF, CSV, text)

    - by Harikrishna
    I want to parse HTML, PDF, CSV and plain text files. Which of these file types is easiest and most efficient to parse? I am asking because I would like to handle all of them through common parsing code if possible. For example, if HTML is the easiest and most efficient to parse, I will write the parsing code for HTML and try to convert the PDF, CSV and text files to HTML (if possible), so the code written for parsing HTML works for all of them. Likewise, if PDF is easiest and most efficient, I will convert the HTML, CSV and text files to PDF and write the parsing code for PDF, so that code can parse HTML, CSV and text files as well. So my questions are: (1) Which type of file (PDF, CSV, HTML, text) is easiest and most efficient to parse? (2) Is converting these files (PDF, text, HTML, CSV) to each other possible - e.g. PDF to HTML, text to HTML and CSV to HTML if HTML parsing is easiest?

    Read the article

  • Data loss when downloading data from LDAP server

    - by Ricky D'Amelio
    Hi there. This question comes from a previous one I asked about handling NSData objects: http://stackoverflow.com/questions/2453785/converting-nsdata-to-an-nsstring-representation-is-failing. I have reached the point where I am taking an NSImage, turning it into NSData and uploading those data bytes to the LDAP server. I am doing this like so:

        //connected successfully to LDAP server above...
        struct berval photo_berval;
        struct berval *jpegPhoto_values[2];

        photo_berval.bv_len = [photo length];
        photo_berval.bv_val = [photo bytes];
        jpegPhoto_values[0] = &photo_berval;
        jpegPhoto_values[1] = NULL;

        mod.mod_type = "jpegPhoto";
        mod.mod_op = LDAP_MOD_REPLACE|LDAP_MOD_BVALUES;
        mod.mod_bvalues = jpegPhoto_values;
        mods[0] = &mod;
        mods[1] = NULL;

        //perform the modify operation
        rc = ldap_modify_ext_s(ld, givenModifyEntry, mods, NULL, NULL);

    That happens with no errors, and you can see a big blob of data when you're in the command line. My problem is, when I go to access the same data at a later stage, I am getting an image file back that's about 120 times smaller than the original image.

        //find the jpegPhoto attribute
        photoA = ldap_first_attribute(ld, photoE, &photoBer);
        while (strcasecmp(photoA, "jpegphoto") != 0) {
            photoA = ldap_next_attribute(ld, photoE, photoBer);
        }

        //get the value of the attribute
        if ((list_of_photos = ldap_get_values_len(ld, photoE, photoA)) != NULL) {
            //get the first JPEG
            photo_data = *list_of_photos[0];
            selectedPictureData = [NSData dataWithBytes:&photo_data length:sizeof(photo_data)];
            [selectedPictureData writeToFile:@"/Users/username/Desktop/Photo 2.jpg" atomically:YES];
            NSLog (@"%@", selectedPictureData);

    Has anyone successfully done this before, or can anyone see what I might be doing wrong? I appreciate anyone's help. Sorry to post so many questions! Ricky.

    Read the article

  • MVC 2.0 - JqGrid Sorting with Multiple Tables

    - by Billy Logan
    I am in the process of implementing the jqGrid and would like to be able to use the sorting functionality. I have run into some issues with sorting columns that are related to the base table. Here is the script to load the grid:

        public JsonResult GetData(GridSettings grid)
        {
            try
            {
                using (IWE dataContext = new IWE())
                {
                    var query = dataContext.LKTYPE.Include("VWEPICORCATEGORY").AsQueryable();

                    ////sorting
                    query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);

                    //count
                    var count = query.Count();

                    //paging
                    var data = query.Skip((grid.PageIndex - 1) * grid.PageSize).Take(grid.PageSize).ToArray();

                    //converting in grid format
                    var result = new
                    {
                        total = (int)Math.Ceiling((double)count / grid.PageSize),
                        page = grid.PageIndex,
                        records = count,
                        rows = (from host in data
                                select new
                                {
                                    TYPE_ID = host.TYPE_ID,
                                    TYPE = host.TYPE,
                                    CR_ACTIVE = host.CR_ACTIVE,
                                    description = host.VWEPICORCATEGORY.description
                                }).ToArray()
                    };

                    return Json(result, JsonRequestBehavior.AllowGet);
                }
            }
            catch (Exception ex)
            {
                //send the error email
                ExceptionPolicy.HandleException(ex, "Exception Policy");
            }

            //have to return something if there is an issue
            return Json("");
        }

    As you can see, the description field is part of the related table ("VWEPICORCATEGORY") and the OrderBy is targeted at LKTYPE. I am trying to figure out how exactly one goes about sorting that particular field, or maybe even a better way to implement this grid using multiple tables and its sorting functionality. Thanks in advance, Billy
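
    (One way to make the related column sortable is to bypass the string-based OrderBy extension for that case and map the grid's sort column onto the navigation property. The helper below is an assumption about how that mapping could look, reusing the entity and column names from the code above; the "asc" convention for grid.SortOrder is also assumed.)

        using System;
        using System.Linq;

        private static IQueryable<LKTYPE> ApplySort(IQueryable<LKTYPE> query, string sortColumn, string sortOrder)
        {
            bool ascending = string.Equals(sortOrder, "asc", StringComparison.OrdinalIgnoreCase);
            switch (sortColumn)
            {
                case "description":
                    // column from the related table: sort on the navigation property
                    return ascending
                        ? query.OrderBy(t => t.VWEPICORCATEGORY.description)
                        : query.OrderByDescending(t => t.VWEPICORCATEGORY.description);
                case "TYPE":
                    return ascending ? query.OrderBy(t => t.TYPE) : query.OrderByDescending(t => t.TYPE);
                default:
                    return ascending ? query.OrderBy(t => t.TYPE_ID) : query.OrderByDescending(t => t.TYPE_ID);
            }
        }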

    Read the article

  • Extracting bool from istream in a templated function

    - by Thomas Matthews
    I'm converting my field classes' read functions into one template function. I have field classes for int, unsigned int, long, and unsigned long. These all use the same method for extracting a value from an istringstream (only the types change):

        template <typename Value_Type>
        Value_Type Extract_Value(const std::string& input_string)
        {
            std::istringstream m_string_stream;
            m_string_stream.str(input_string);
            m_string_stream.clear();
            Value_Type value;
            m_string_stream >> value;
            return value;
        }

    The tricky part is the bool (Boolean) type. There are many textual representations for Boolean: 0, 1, T, F, TRUE, FALSE, and all the case-insensitive combinations. Here are the questions:
    1. What does the C++ standard say are valid data for extracting a bool using the stream extraction operator?
    2. Since a Boolean can be represented by text, does this involve locales?
    3. Is this platform dependent?
    I would like to simplify my code by not writing my own handler for bool input. I am using MS Visual Studio 2008 (version 9), C++, and Windows XP and Vista.

    Read the article

  • COM access to classic ASP intrinsic objects

    - by wrench
    I'm converting a VB6 COM object that works with classic ASP to a C# .NET COM object:

        Interop_COMSVCS.ObjectContext objContext;
        Interop_COMSVCS.AppServer objAppServer;

        objAppServer = null; // need to initialize before using
        objAppServer = new Interop_COMSVCS.AppServer();
        objContext = objAppServer.GetObjectContext();

        oApplication = (Interop_ASP.Application)objContext["Application"];
        oSession = (Interop_ASP.Session)objContext["Session"];
        oResponse = (Interop_ASP.Response)objContext["Response"];
        oRequest = (Interop_ASP.Request)objContext["Request"];

    oSession works for storing local information to/from ASP storage, and oResponse can do simple writes to the browser, BUT any code like oRequest.Cookies["sessionId"] or oResponse.Cookies["sessionId"] doesn't provide any sort of read or write access. Any cast or conversion I try to do tells me I'm dealing with an empty or null System.Object. There doesn't seem to be any syntax to get/set the cookie collection. With COM+ I've seen some articles that indicate a switch for "Access to ASP Intrinsic Objects" -- that seems to describe my issue, but I'd rather not use COM+. There are also some articles indicating that if I were using ASP.NET I could use HttpContext and HttpRequest/Response, but that's a completely different set of data objects that don't seem to be available with classic ASP. I've been stuck on this for a few days. Any help appreciated.

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. So, my application is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records in the tables. Some individual tables have 250 million rows, many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job. So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while at the same time keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between or something?

    Read the article

  • Passing object(s) to a Controller Action

    - by Nicholas H
    I'm attempting to use jQuery to do a $.post() to an MVC controller action. Here are the jQuery calls:

        var _authTicket = { username: 'testingu', password: 'testingp' };

        function DoPost() {
            var inputObj = { authTicket: _authTicket, dateRange: 'ThisWeek' };
            $.post('/MyController/MyAction', inputObj, PostResult);
        }

        function PostResult(data) {
            alert(JSON.stringify(data));
        }

    Here's the controller action:

        <HttpPost()> _
        Function MyAction(ByVal authTicket As AuthTicket, ByVal dateRange As String) As ActionResult
            Dim u = ValidateUser(authTicket)
            If u Is Nothing Then Throw New Exception("User not valid")
            ' etc..
        End Function

    The problem is, MyAction's "authTicket" parameter is all empty. The second "dateRange" parameter gets populated just fine. I'm pretty sure the issue stems from jQuery turning the data into key/value pairs to POST to the action. MVC must not understand the format, at least when it's an object with its own properties. The name/value pair format that jQuery converts to is like:

        authTicket[username] = "testingu"
        authTicket[password] = "testingp"

    which in turn gets made into x-www-form-urlencoded data:

        authTicket%5Busername%5D=testingu
        &authTicket%5Bpassword%5D=testingp

    I guess I could always have "username" and "password" properties in my actions, but that seems wrong to me. Any help would be appreciated.
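
    (For illustration only - the action above is VB.NET, and the AuthTicket class shown here is an assumption about its shape - MVC 2's DefaultModelBinder will populate a complex action parameter when the posted field names use dot notation as a prefix, e.g. "authTicket.username" and "authTicket.password" instead of jQuery's bracket notation:)

        using System.Web.Mvc;

        public class AuthTicket
        {
            public string Username { get; set; }
            public string Password { get; set; }
        }

        public class MyController : Controller
        {
            [HttpPost]
            public ActionResult MyAction(AuthTicket authTicket, string dateRange)
            {
                // Fields posted as "authTicket.username" / "authTicket.password"
                // bind to this parameter through the prefix convention.
                return Json(new { user = authTicket.Username, range = dateRange });
            }
        }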

    Read the article

  • Synchronize DataGrid and DataForm in Silverlight 3

    - by SpiralGray
    I've been banging my head against the wall for a couple of days on this and it's time to ask for help. I've got a DataGrid and DataForm on the same UserControl. I'm using an MVVM approach, so there is a single ViewModel for the UserControl. That ViewModel has a couple of properties that are relevant to this discussion:

        public ObservableCollection<VehicleViewModel> Vehicles { get; private set; }

        public VehicleViewModel SelectedVehicle
        {
            get { return selectedVehicle; }
            private set
            {
                selectedVehicle = value;
                OnPropertyChanged( "SelectedVehicle" );
            }
        }

    In the XAML I've got the DataGrid and DataForm defined as follows:

        <data:DataGrid SelectionMode="Single" ItemsSource="{Binding Vehicles}"
                       SelectedItem="{Binding SelectedVehicle, Mode=TwoWay}"
                       AutoGenerateColumns="False" IsReadOnly="True">
        <dataFormToolkit:DataForm CurrentItem="{Binding SelectedVehicle}" />

    So as the SelectedItem changes on the DataGrid, it should push that change back to the ViewModel, and when the ViewModel raises the OnPropertyChanged the DataForm should refresh itself with the information for the newly-selected VehicleViewModel. However, the setter for SelectedVehicle is never being called and in the Output window of VS I'm seeing the following error:

        System.Windows.Data Error: ConvertBack cannot convert value 'xxxx.ViewModel.VehicleViewModel'
        (type 'xxxx.ViewModel.VehicleViewModel').
        BindingExpression: Path='SelectedVehicle'
        DataItem='xxxx.ViewModel.MainViewModel' (HashCode=31664161);
        target element is 'System.Windows.Controls.DataGrid' (Name='');
        target property is 'SelectedItem' (type 'System.Object')..
        System.MethodAccessException: xxxx.ViewModel.MainViewModel.set_SelectedVehicle(xxxx.ViewModel.VehicleViewModel)

    It sounds like it's having a problem converting from a VehicleViewModel to an object (or back again), but I'm confused as to why that would be (or even if I'm on the right track with that assumption). Each row/item in the DataGrid should be a VehicleViewModel (because the ItemsSource is bound to an ObservableCollection of that type), so when the SelectedItem changes it should be dealing with an instance of VehicleViewModel. Any insight would be appreciated.
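
    (One detail worth pulling out of that error text: it ends in a MethodAccessException on set_SelectedVehicle, and the property above has a private setter. Silverlight's binding engine cannot call non-public members, so a plausible fix - an assumption based on that reading rather than anything confirmed in the post - is simply to make the setter public:)

        private VehicleViewModel selectedVehicle;

        public VehicleViewModel SelectedVehicle
        {
            get { return selectedVehicle; }
            set   // public, so the TwoWay binding can call it
            {
                selectedVehicle = value;
                OnPropertyChanged("SelectedVehicle");
            }
        }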

    Read the article

  • I am getting a Radix out of range exception on performing decryption

    - by user3672391
    I am generating a keypair and converting one of the keys into a string, which is later inserted into the database, using the following code:

        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(2048);
        KeyPair generatedKeyPair = keyGen.genKeyPair();

        PublicKey pubkey = generatedKeyPair.getPublic();
        PrivateKey prvkey = generatedKeyPair.getPrivate();

        System.out.println("My Public Key>>>>>>>>>>>"+pubkey);
        System.out.println("My Private Key>>>>>>>>>>>"+prvkey);

        String keyAsString = new BigInteger(prvkey.getEncoded()).toString(64);

    I then retrieve the string from the database and convert it back to the original key using the following code (where rst is my ResultSet):

        String keyAsString = rst.getString("privateKey").toString();
        byte[] bytes = new BigInteger(keyAsString, 64).toByteArray();
        //byte k[] = "HignDlPs".getBytes();
        PKCS8EncodedKeySpec encodedKeySpec = new PKCS8EncodedKeySpec(bytes);
        KeyFactory rsaKeyFac = KeyFactory.getInstance("RSA");
        PrivateKey privKey = rsaKeyFac.generatePrivate(encodedKeySpec);

    On using the privKey for RSA decryption, I get the following exception:

        java.lang.NumberFormatException: Radix out of range
            at java.math.BigInteger.<init>(BigInteger.java:294)
            at com.util.SimpleFTPClient.downloadFile(SimpleFTPClient.java:176)
            at com.Action.FileDownload.processRequest(FileDownload.java:64)
            at com.Action.FileDownload.doGet(FileDownload.java:94)

    Please guide.

    Read the article

  • C# Problem with getPixel & setting RTF text colour accordingly

    - by m3n
    Heyo, I'm messing with converting images to ASCII ones. For this I load the image, use GetPixel() on each pixel, then insert a character with that colour into a RichTextBox.

        Bitmap bmBild = new Bitmap(openFileDialog1.FileName.ToString()); // valid image
        int x = 0, y = 0;

        for (int i = 0; i <= (bmBild.Width * bmBild.Height - bmBild.Height); i++)
        {
            // Change text here ("Ändra text här")
            richTextBox1.Text += "x";
            richTextBox1.Select(i, 1);

            if (bmBild.GetPixel(x, y).IsKnownColor)
            {
                richTextBox1.SelectionColor = bmBild.GetPixel(x, y);
            }
            else
            {
                richTextBox1.SelectionColor = Color.Red;
            }

            if (x >= (bmBild.Width - 1))
            {
                x = 0;
                y++;
                richTextBox1.Text += "\n";
            }
            x++;
        }

    GetPixel does return the correct colour, but the text only ends up black. If I change this:

        richTextBox1.SelectionColor = bmBild.GetPixel(x, y);

    to this:

        richTextBox1.SelectionColor = Color.Red;

    it works fine. Why am I not getting the right colours? (I know it doesn't do the new lines properly, but I thought I'd get to the bottom of this issue first.) Thanks
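
    (One possibility, offered as an assumption rather than a confirmed diagnosis: richTextBox1.Text += "x" replaces the entire Text property, which throws away the colour formatting applied on earlier iterations. A small helper that appends without touching Text would look like the sketch below; the method name is made up, and it assumes the same RichTextBox/Bitmap setup as the code above.)

        using System.Drawing;
        using System.Windows.Forms;

        // Appends one coloured character without resetting the formatting already applied.
        private void AppendColoredChar(RichTextBox box, char c, Color colour)
        {
            int start = box.TextLength;
            box.AppendText(c.ToString());
            box.Select(start, 1);
            box.SelectionColor = colour;
            box.Select(box.TextLength, 0);   // collapse the selection again
        }

        // usage inside the loop, e.g.:
        // AppendColoredChar(richTextBox1, 'x', bmBild.GetPixel(x, y));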

    Read the article

  • Event Handling for MFC Dialog

    - by Maksud
    This is my second question of the day, pardon me. I am writing a wrapper library to communicate with a scanner device. The source code was in C++ MFC. I am converting it to a plain DLL which will be invoked from C#, so I am using DllImport in C# to call the wrapper library. The MFC code I've been given suggests the library is an ActiveX object, at least I think so:

        class CDpocx : public CWnd
        {
        }

    So in my wrapper library I will have an instance of CDpocx and will call it via C# P/Invoke. But the problem is that CDpocx also raises some events which I need to catch. In a traditional app, I would just attach a function to them, but how would I attach the events in a non-MFC class? I have seen something like:

        BEGIN_EVENTSINK_MAP(CVC60Dlg, CDialog)
            //{{AFX_EVENTSINK_MAP(CVC60Dlg)
            ON_EVENT(CVC60Dlg, IDC_DPOCXCTRL1, 1, OnReadyDpocxctrl1, VTS_NONE)
            //}}AFX_EVENTSINK_MAP
        END_EVENTSINK_MAP()

    OnReadyDpocxctrl1 is the function that handles the 1 (Ready) event. How can I get similar functionality in a non-MFC class? Regards, Maksud

    Read the article

  • Optimizing code using PIL

    - by freakazo
    Firstly, sorry for the long piece of code pasted below. This is my first time actually having to worry about the performance of an application, so I haven't really ever worried about performance. This piece of code pretty much searches for an image inside another image. It takes 30 seconds to run on my computer; converting the images to greyscale and other changes shaved off 15 seconds, and I need another 15 shaved off. I did read a bunch of pages and looked at examples, but I couldn't find the same problems in my code, so any help would be greatly appreciated. From the looks of it (cProfile), 25 seconds are spent within the Image module and only 5 seconds in my code.

        from PIL import Image
        import os, ImageGrab, pdb, time, win32api, win32con
        import cProfile

        def GetImage(name):
            name = name + '.bmp'
            try:
                print(os.path.join(os.getcwd(), "Images", name))
                image = Image.open(os.path.join(os.getcwd(), "Images", name))
            except:
                print('error opening image;', name)
            return image

        def Find(name):
            image = GetImage(name)
            imagebbox = image.getbbox()
            screen = ImageGrab.grab()
            #screen = Image.open(os.path.join(os.getcwd(),"Images","Untitled.bmp"))
            YLimit = screen.getbbox()[3] - imagebbox[3]
            XLimit = screen.getbbox()[2] - imagebbox[2]
            image = image.convert("L")
            Screen = screen.convert("L")
            Screen.load()
            image.load()
            #print(XLimit, YLimit)
            Found = False
            image = image.getdata()
            for y in range(0, YLimit):
                for x in range(0, XLimit):
                    BoxCoordinates = x, y, x + imagebbox[2], y + imagebbox[3]
                    ScreenGrab = screen.crop(BoxCoordinates)
                    ScreenGrab = ScreenGrab.getdata()
                    if image == ScreenGrab:
                        Found = True
                        #print("woop")
                        return x, y
            if Found == False:
                return "Not Found"

        cProfile.run('print(Find("Login"))')

    Read the article

  • Repackaging Jasper-Reports into an application specific OSGi bundle, legal or not?

    - by Chris
    Hi, I wanted to ask a (probably silly) question regarding the packaging of existing open-source components as OSGi bundles (more specifically Jasper Reports). I have an application that I am converting from a monolithic jar-hell type architecture to something more modular, and OSGi is my weapon of choice. There are various modules I have in mind, but one of them is a reporting module. My own reporting module will be a jar file containing my code that should reference a Jasper Reports bundle. Trouble is, Jasper Reports depends on far, far too many libraries and is quite monolithic in its own right. I therefore wish to build my own Jasper Reports bundle, but this is where I start getting confused about the legality of repackaging. I don't plan to re-compile, but I do plan to re-bundle, removing known items that I do not require. Can anyone offer advice on whether I am permitted to repackage (not recompile or extend) open-source libraries into OSGi bundles without falling foul of the 'derivative works' clause of the LGPL? I noticed that Groovy seems to offer some monolithic jars that include all dependencies and actually goes so far as to re-arrange the packages of its dependencies so that there are no namespace conflicts. This seems to me to be a violation of the license, but if anyone can reassure me that this is legal then I would feel safer about my less intrusive custom-bundling of Jasper Reports. Thanks for your time, Chris

    Read the article

  • How would I write the following applescript in Obj-C AppScript? ASTranslate was of no help =(

    - by demonslayer319
    The translation tool isn't able to translate this working code. I copied it out of a working script.

        set pathToTemp to (POSIX path of ((path to desktop) as string))
        -- change jpg to pict
        tell application "Image Events"
            try
                launch
                set albumArt to open file (pathToTemp & "albumart.jpg")
                save albumArt as PICT in file (pathToTemp & "albumart.pict")
                --the first 512 bytes are the PICT header, so it reads from byte 513
                --this is to allow the image to be added to an iTunes track later.
                set albumArt to (read file (pathToTemp & "albumart.pict") from 513 as picture)
                close
            end try
        end tell

    The code is taking a jpg image, converting it to a PICT file, and then reading the file minus the header (the first 512 bytes). Later in the script, albumArt will be added to an iTunes track. I tried translating the code (minus the comments), but ASTranslate froze for a good 2 minutes before giving me this:

        Untranslated event 'earsffdr'

        #import "IEGlue/IEGlue.h"
        IEApplication *imageEvents = [IEApplication applicationWithName: @"Image Events"];
        IELaunchCommand *cmd = [[imageEvents launch] ignoreReply];
        id result = [cmd send];

        #import "IEGlue/IEGlue.h"
        IEApplication *imageEvents = [IEApplication applicationWithName: @"Image Events"];
        IEReference *ref = [[imageEvents files] byName: @"/Users/Doom/Desktop/albumart.jpg"];
        id result = [[ref open] send];

        #import "IEGlue/IEGlue.h"
        IEApplication *imageEvents = [IEApplication applicationWithName: @"Image Events"];
        IEReference *ref = [[imageEvents images] byName: @"albumart.jpg"];
        IESaveCommand *cmd = [[[ref save] in: [[imageEvents files] byName: @"/Users/Doom/Desktop/albumart.pict"]] as: [IEConstant PICT]];
        id result = [cmd send];

        'crdwrread'
        Traceback (most recent call last):
          File "objcrenderer.pyc", line 283, in renderCommand
        KeyError: 'crdwrread'

        'cascrgdut'
        Traceback (most recent call last):
          File "objcrenderer.pyc", line 283, in renderCommand
        KeyError: 'cascrgdut'

        'crdwrread'
        Traceback (most recent call last):
          File "objcrenderer.pyc", line 283, in renderCommand
        KeyError: 'crdwrread'

        Untranslated event 'rdwrread'

    OK, I have no clue how to make sense of this. Thanks for any and all help!

    Read the article

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer. I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return a TRUE. If I set this_tx <- last_tx, I'll get a TRUE. What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors - same levels, same numeric coding for the levels, both just subsets of the same original data frame. Converting them to character vectors doesn't help. Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses. The best plan I've come up with is to check the treatments chronologically. If the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is a part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also: Using the same PixelFormat as the source bitmap, to avoid format conversion Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again.) I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image.) I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy) so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?

    Read the article

  • rails, mysql charsets & encoding: binary

    - by Benjamin Vetter
    Hi, I've got a Rails app that runs using UTF-8. It uses a MySQL database, with all tables using MySQL's default charset and collation (i.e. latin1). Therefore the latin1 tables contain UTF-8 data. Sure, that's not nice, but I'm not really interested in that. Everything works fine, because the connection encoding is latin1 as well, so MySQL does not convert between charsets. Only one problem: I need a UTF-8 fulltext index for one table:

        mysql> show create table autocompletephrases;
        ... AUTO_INCREMENT=310095 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    But I don't want to convert between charsets in my Rails app. Therefore I would like to know if I could just set, in config/database.yml:

        production:
          adapter: mysql
    >>>>  encoding: binary
          ...

    which just calls SET NAMES 'binary' when connecting to MySQL. It looks like it works for my case, because I guess it forces MySQL to -not- convert between charsets (MySQL docs). Does anyone know about problems with doing this? Any side effects? Or do you have any other suggestions? I'd like to avoid converting my whole database to UTF-8. Many thanks! Benjamin

    Read the article

  • Persisting Joda DateTime instead of Java Date in hibernate

    - by Tauren
    My entities currently contain java Date properties. I'm starting to use Joda Time for date manipulation and calculations quite frequently. This means that I'm constantly having to convert my Dates into Joda DateTime objects and back again. So I was wondering, is there any reason I shouldn't just change my entities to store Joda DateTime objects instead of Java Date objects? Please note that these entities are persisted via Hibernate. I found the jodatime-hibernate project, but I also was reading on the Joda mailing list that it wasn't compatible with newer versions of hibernate. And it seems like it isn't very well maintained. So I'm wondering if it would be best to just continue converting between Date and DateTime, or if it would be wise to start persisting DateTime objects. My concern is being reliant on a poorly maintained library. Edit: Note that one of my objectives is to be better able to store timezone information. Storing just a Date appears to save the date in the local timezone. As my application can be used globally, I need to know the timezone as well. Joda Time Hibernate seems to address this as well in the user guide.

    Read the article

  • How to Verify Signature, Loading PUBLIC KEY From PEM file?

    - by bbirtle
    I'm posting this in the hope it saves somebody else the hours I lost on this really stupid problem involving converting formats of public keys. If anybody sees a simpler solution or a problem, please let me know! The eCommerce system I'm using sends me some data along with a signature. They also give me their public key in .pem format. The .pem file looks like this:

        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDe+hkicNP7ROHUssGNtHwiT2Ew
        HFrSk/qwrcq8v5metRtTTFPE/nmzSkRnTs3GMpi57rBdxBBJW5W9cpNyGUh0jNXc
        VrOSClpD5Ri2hER/GcNrxVRP7RlWOqB1C03q4QYmwjHZ+zlM4OUhCCAtSWflB4wC
        Ka1g88CjFwRw/PB9kwIDAQAB
        -----END PUBLIC KEY-----

    Here's the magic code to turn the above into an RSACryptoServiceProvider which is capable of verifying the signature. It uses the BouncyCastle library, since .NET apparently (and appallingly) cannot do it without some major headaches involving certificate files:

        RSACryptoServiceProvider thingee;
        using (var reader = File.OpenText(@"c:\pemfile.pem"))
        {
            var x = new PemReader(reader);
            var y = (RsaKeyParameters)x.ReadObject();
            thingee = (RSACryptoServiceProvider)RSACryptoServiceProvider.Create();
            var pa = new RSAParameters();
            pa.Modulus = y.Modulus.ToByteArray();
            pa.Exponent = y.Exponent.ToByteArray();
            thingee.ImportParameters(pa);
        }

    And then the code to actually verify the signature:

        var signature = ... //reads from the packet sent by the eCommerce system
        var data = ...      //reads from the packet sent by the eCommerce system

        var sha = new SHA1CryptoServiceProvider();
        byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(data));
        byte[] bSignature = Convert.FromBase64String(signature);

        ///Verify signature, FINALLY:
        var hasValidSig = thingee.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), bSignature);

    Read the article

  • decimal.TryParse() drops leading "1"

    - by Martin Harris
    Short and sweet version: on one machine out of around a hundred test machines, decimal.TryParse() is converting "1.01" to 0.01.

    Okay, this is going to sound crazy, but bear with me... We have a client application that communicates with a webservice through JSON, and that service returns a decimal value as a string, so we store it as a string in our model object:

        [DataMember(Name = "value")]
        public string Value { get; set; }

    When we display that value on screen it is formatted to a specific number of decimal places. So the process we use is string -> decimal, then decimal -> string. The application is currently undergoing final testing and is running on more than 100 machines, where this all works fine. However, on one machine, if the decimal value has a leading '1' then it is replaced by a zero. I added simple logging to the code so it looks like this:

        Log("Original string value: {0}", value);
        decimal val;
        if (decimal.TryParse(value, out val))
        {
            Log("Parsed decimal value: {0}", val);
            string output = val.ToString(format, CultureInfo.InvariantCulture.NumberFormat);
            Log("Formatted string value: {0}", output);
            return output;
        }

    On my machine - and every other client machine - the logfile output is:

        Original string value: 1.010000
        Parsed decimal value: 1.010000
        Formatted string value: 1.01

    On the defective machine the output is:

        Original string value: 1.010000
        Parsed decimal value: 0.010000
        Formatted string value: 0.01

    So it would appear that the decimal.TryParse method is at fault. Things we've tried:
    - Uninstalling and reinstalling the client application
    - Uninstalling and reinstalling .NET 3.5 SP1
    - Comparing the defective machine's regional settings for numbers (using English (United Kingdom)) to those of a working machine - no differences.
    Has anyone seen anything like this, or does anyone have any suggestions? I'm quickly running out of ideas... While I was typing this, some more info came in: passing a string value of "10000" to Convert.ToInt32() returns 0, so that also seems to drop the leading 1.
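
    (As a way of ruling the machine's number-format data in or out - a diagnostic sketch, not a fix for whatever is broken on that one machine - the parse can be pinned to the invariant culture so the defective machine's regional settings never enter into it:)

        using System.Globalization;

        static decimal? ParseInvariant(string value)
        {
            decimal val;
            // NumberStyles.Number + InvariantCulture: '.' is always the decimal separator.
            if (decimal.TryParse(value, NumberStyles.Number, CultureInfo.InvariantCulture, out val))
            {
                return val;
            }
            return null;
        }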

    Read the article

  • Please Help Me with my Homework Problem in C++

    - by sil3nt
    Hey there, this is part of a question I got in class. I'm at the final stretch, but this has become a major problem. In it I'm given a certain value called the "gold value", which is 40.5 here but changes with the input, and I have these constants:

        const int RUBIES_PER_DIAMOND = 5;    // relative values. *
        const int EMERALDS_PER_RUBY = 2;
        const int GOLDS_PER_EMERALDS = 5;
        const int SILVERS_PER_GOLD = 4;
        const int COPPERS_PER_SILVER = 5;

        const int DIAMOND_VALUE = 50;        // gold values. *
        const int RUBY_VALUE = 10;
        const int EMERALD_VALUE = 5;
        const float SILVER_VALUE = 0.25;
        const float COPPER_VALUE = 0.05;

    This means that basically for every diamond there are 5 rubies, and for every ruby there are 2 emeralds, and so on. And the "gold value" of every diamond, for example, is 50 (DIAMOND_VALUE = 50); this is how much one diamond is worth in golds. My problem is converting 40.5 into these diamond and ruby values. I know the answer is 4 rubies and 2 silvers, but how do I write the algorithm for this so that it gives the best estimate for every gold value that comes along?
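
    (The usual approach to this kind of denomination breakdown is a greedy pass from the most valuable unit down to the least valuable one. Below is a minimal sketch of that idea - written in C# rather than the assignment's C++, with the denomination table spelled out as an assumption from the constants above.)

        using System;

        static void ConvertGoldValue(double goldValue)
        {
            // gold value of each denomination, largest first
            var denominations = new[]
            {
                Tuple.Create("diamonds", 50.0),
                Tuple.Create("rubies",   10.0),
                Tuple.Create("emeralds",  5.0),
                Tuple.Create("golds",     1.0),
                Tuple.Create("silvers",   0.25),
                Tuple.Create("coppers",   0.05)
            };

            foreach (var d in denominations)
            {
                int count = (int)((goldValue + 1e-9) / d.Item2); // epsilon guards against float error
                goldValue -= count * d.Item2;
                if (count > 0)
                {
                    Console.WriteLine("{0} {1}", count, d.Item1);
                }
            }
        }

        // ConvertGoldValue(40.5) prints "4 rubies" and "2 silvers", matching the expected answer.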

    Read the article

  • Java SortedMap to Scala TreeMap

    - by Dave
    I'm having trouble converting a java SortedMap into a scala TreeMap. The SortedMap comes from deserialization and needs to be converted into a scala structure before being used. Some background, for the curious, is that the serialized structure is written through XStream and on desializing I register a converter that says anything that can be assigned to SortedMap[Comparable[_],_] should be given to me. So my convert method gets called and is given an Object that I can safely cast because I know it's of type SortedMap[Comparable[_],_]. That's where it gets interesting. Here's some sample code that might help explain it. // a conversion from comparable to ordering scala> implicit def comparable2ordering[A <: Comparable[A]](x: A): Ordering[A] = new Ordering[A] { | def compare(x: A, y: A) = x.compareTo(y) | } comparable2ordering: [A <: java.lang.Comparable[A]](x: A)Ordering[A] // jm is how I see the map in the converter. Just as an object. I know the key // is of type Comparable[_] scala> val jm : Object = new java.util.TreeMap[Comparable[_], String]() jm: java.lang.Object = {} // It's safe to cast as the converter only gets called for SortedMap[Comparable[_],_] scala> val b = jm.asInstanceOf[java.util.SortedMap[Comparable[_],_]] b: java.util.SortedMap[java.lang.Comparable[_], _] = {} // Now I want to convert this to a tree map scala> collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) }) <console>:15: error: diverging implicit expansion for type Ordering[A] starting with method Tuple9 in object Ordering collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) })

    Read the article
