Search Results

Search found 12765 results on 511 pages for 'format'.


  • python crc32 woes

    - by lazyr
    I'm writing a Python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decompressible blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no?

    The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits.

    But hours of coding and testing have left me bewildered, and my puzzlement boils down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the Wikipedia article? You start with 0b00000000 and pad with 32 zeroes, then do polynomial division by 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits are the checksum, and how can that not be all zeroes? I've searched Google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
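    For what it's worth, the mismatch has a standard explanation: binascii.crc32 implements the reflected CRC-32 used by zlib and PKZIP, which initializes the register to 0xFFFFFFFF and XORs the final register with 0xFFFFFFFF. Neither step appears in the bare polynomial-division description, and together they are why a zero byte cannot hash to zero. A minimal sketch of that variant (bzip2's own checksum is believed here to be the bit-unreflected CRC-32 with the same init and final XOR, so treat that as an assumption to check against the spec):

        import binascii

        def crc32_zlib(data):
            # Reflected CRC-32: init=0xFFFFFFFF, xorout=0xFFFFFFFF.
            # 0xEDB88320 is the bit-reversed form of the 0x04C11DB7 polynomial.
            crc = 0xFFFFFFFF
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
            return crc ^ 0xFFFFFFFF

        assert crc32_zlib(b"\x00") == binascii.crc32(b"\x00") == 0xD202EF8D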


  • Blob byte array in XML to Image

    - by Rayvr
    Hi, I am getting an XML file to generate a preview, in a format like this:

        <PARAM>
          <LABEL>Preview 16x16</LABEL>
          <ID>{03F5C6D3-ABCD-4889-B3AA-C3524C62FA1C}</ID>
          <LAYER>-1</LAYER>
          <VALUE>
            <BLOB>
              /9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8S
              EhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEU
              Hh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAAR
              CAAOABADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAA
              AgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkK
              FhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWG
              h4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl
              5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREA
              AgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYk
              NOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOE
              hYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk
              5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDP+IXxH+Ilr4/1DRrLxZqUrjUTbMLKdtsz
              eZgLCgHyDHy4GSeee9dT4Z8G/FjxHrmla3Y3bRlRtl1iW9ztGTnlXLS47YG0nrXQa78Bda07
              U7jXNM1nS9yGVhJLHJvk8wkEkchWAYjcuDzxggGuB8K6v49i8X28Wg+IodMkaUQFI4sQSEt1
              eM5B42qOPlVVVcAV5M8NKdSMW/cV3vrfXyb/
              x/I+4+sVUnPCUVeyV3Z3010923krdPU/9k=
            </BLOB>
          </VALUE>
        </PARAM>

    I need to convert the <BLOB> section into an Image. I access the element value like this:

        string clean = valueC.ElementAt(0).Value.Replace("\t", string.Empty).Replace("\n", string.Empty);

    I've tried to read it into a MemoryStream and convert it to an Image:

        MemoryStream ms = new MemoryStream(blob, 0, blob.Length);
        ms.Write(blob, 0, blob.Length);
        Image i = Image.FromStream(ms);

    This way I get a "Parameter not valid" exception when getting the Image. I've also tried to save it directly into a file:

        using (FileStream fs = new FileStream(label + ".jpg", FileMode.Create))
        {
            fs.Write(blob, 0, blob.Length);
        }

    But when I try to open the generated file, it displays a message saying the file is damaged. I know that encoding is important; I've already tried ASCII, UTF-8, UTF-7, and this:

        BinaryFormatter bf = new BinaryFormatter();
        MemoryStream ms = new MemoryStream();
        bf.Serialize(ms, clean);
        ms.Seek(0, 0);
        byte[] blob = ms.ToArray();

    I don't know what else to do. I'll appreciate it if somebody can help me. Thanks
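    An observation that may explain both failures, offered tentatively: the blob text begins with /9j/, which is the base64 encoding of the JPEG start-of-image header, so the element text most likely needs a base64 decode (rather than a text re-encoding or a BinaryFormatter round-trip) before it is image bytes; in .NET that would presumably be Convert.FromBase64String. A minimal sketch of the decode step, in Python only because the step itself is language-neutral:

        import base64
        import re

        blob_text = "..."  # placeholder: the text content of the <BLOB> element
        clean = re.sub(r"\s+", "", blob_text)  # strip tabs, newlines, spaces
        jpeg_bytes = base64.b64decode(clean)   # real JPEG bytes: should start FF D8
        with open("preview.jpg", "wb") as f:
            f.write(jpeg_bytes)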


  • JQuery autocomplete problem

    - by heffaklump
    I'm using jQuery's Autocomplete plugin, but it doesn't autocomplete upon entering anything. Any ideas why it doesn't work? The basic example works, but not mine.

        var ppl = {"ppl":[{"name":"peterpeter", "work":"student"},
                          {"name":"piotr", "work":"student"}]};

        var options = {
            matchContains: true, // So we can search inside string too
            minChars: 2,         // this sets autocomplete to begin from X characters
            dataType: 'json',
            parse: function(data) {
                var parsed = [];
                data = data.ppl;
                for (var i = 0; i < data.length; i++) {
                    parsed[parsed.length] = {
                        data: data[i],        // the entire JSON entry
                        value: data[i].name,  // the default display value
                        result: data[i].name  // to populate the input element
                    };
                }
                return parsed;
            },
            // To format the data returned by the autocompleter for display
            formatItem: function(item) {
                return item.name;
            }
        };

        $('#inputplace').autocomplete(ppl, options);

    OK, updated:

        <input type="text" id="inputplace" />

    So, when entering for example "peter" in the input field, no autocomplete suggestions appear. It should give "peterpeter", but nothing happens. And one more thing: using this example works perfectly.

        var data = "Core Selectors Attributes Traversing Manipulation CSS Events Effects Ajax Utilities".split(" ");
        $("#inputplace").autocomplete(data);


  • How can I write an XML on my hard drive to GetRequestStream

    - by swolff1978
    I need to post raw XML to a site and read the response. With the following code I keep getting an "Unknown File Format" error and I'm not sure why.

        XmlDocument sampleRequest = new XmlDocument();
        sampleRequest.Load(@"C:\SampleRequest.xml");
        byte[] bytes = Encoding.UTF8.GetBytes(sampleRequest.ToString());

        string uri = "https://www.sample-gateway.com/gw.aspx";
        req = WebRequest.Create(uri);
        req.Method = "POST";
        req.ContentLength = bytes.Length;
        req.ContentType = "text/xml";

        using (var requestStream = req.GetRequestStream())
        {
            requestStream.Write(bytes, 0, bytes.Length);
        }

        // Send the data to the webserver
        rsp = req.GetResponse();
        XmlDocument responseXML = new XmlDocument();
        using (var responseStream = rsp.GetResponseStream())
        {
            responseXML.Load(responseStream);
        }

    I am fairly certain my issue is what/how I am writing to the requestStream, so: how can I modify that code so that I may write an XML document located on the hard drive to the request stream?


  • How to get spacing between characters printed using TextOut ?

    - by life-warrior
    I'm trying to calculate the size of each cell (containing text like "ff" or "a0"), so that 32 cells will fit into the window by width. However, charWidth*2 doesn't represent the width of a cell, since it doesn't take the spacing between characters into account. How can I obtain the size of a font so that 32 cells, each two chars like "ff", fit exactly into the window's client area? Courier is a fixed-width font.

        RECT rect;
        ::GetClientRect( hWnd, &rect );
        LONG charWidth = (rect.right-rect.left)/BLOCK_SIZE/2-2;
        int oldMapMode = ::SetMapMode( hdc, MM_TEXT );
        HFONT font = CreateFont( charWidth*2, charWidth, 0, 0, FW_DONTCARE,
            FALSE, FALSE, FALSE, DEFAULT_CHARSET, OUT_OUTLINE_PRECIS,
            CLIP_DEFAULT_PRECIS, CLEARTYPE_QUALITY, FF_ROMAN, _T("Courier") );
        HGDIOBJ oldFont = ::SelectObject( hdc, font );

        for( int i = 0; i < BLOCK_SIZE; ++i )
        {
            CString str;
            str.Format( _T("%.2x"), (unsigned char)*(g_memAddr+i) );
            SIZE size;
            ::TextOut( hdc, (size.cx+2)*i+1, 1, str, _tcslen((LPCTSTR)str) );
        }


  • django: results in in_bulk style without IDs

    - by valya
    In django 1.1.1, Place.objects.in_bulk() does not work, while Place.objects.in_bulk(range(1, 100)) works and returns a dictionary mapping ints to Places, keyed by primary key. How do I avoid using range in this situation (and avoid a separate query just for the ids)? I just want to get all objects in this dictionary format.

        >>> Place.objects.in_bulk()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/lib/python2.5/site-packages/Django-1.1.1-py2.5.egg/django/db/models/manager.py", line 144, in in_bulk
            return self.get_query_set().in_bulk(*args, **kwargs)
        TypeError: in_bulk() takes exactly 2 arguments (1 given)
        >>> Place.objects.in_bulk(range(1, 100))
        {1L: <Place: "??? ????">, 3L: <Place: "???????????? ?????">,
         4L: <Place: "????????? "??????"">, 5L: <Place: "????????? "??????"">,
         8L: <Place: "????????? "??????????????"">, 9L: <Place: "??????? ????????">,
         10L: <Place: "????????? ???????">, 11L: <Place: "??????????????? ???">,
         14L: <Place: "????? ????? ??????">}
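    A hedged workaround sketch, written against the django 1.1-era API from memory (so verify the exact calls): in_bulk() in this version requires an explicit id list, but the same {pk: object} dictionary can be built from a plain queryset, with no id pre-query and no range() guess:

        # build the pk -> object mapping directly (dict() over a generator
        # expression works on Python 2.5)
        places = dict((p.pk, p) for p in Place.objects.all())

        # or, if in_bulk() itself is preferred, feed it the real ids:
        ids = list(Place.objects.values_list("pk", flat=True))
        places = Place.objects.in_bulk(ids)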


  • How to read a parameter passed to a facelet from a backing bean

    - by Antonio
    Hi, I've written a facelet, and a corresponding backing bean, that implements user management (addition, deletion and so on). I want to be able to perform some custom processing when, for instance, a new user is added. There is a "create" button in the facelet, whose click event is handled by its backing bean. At the end of the event handler, I want to be able to call a method of another backing bean, which is not known in advance, because ideally the facelet can be used in several pages, with different custom processing. I thought to implement this feature by providing the facelet with a backing bean name and a method name, like this:

        <myfacelet:subaccounts backingBean="myBackingBean" createListener="createListener" />

    and at the end of the event handler call #{myBackingBean.createListener} someway. I'm using this method (along with some overloads) to obtain a MethodExpression:

        protected MethodExpression getMethodExpression(String beanName, String methodName,
                Class<?> expectedReturnType, Class<?>[] expectedParamTypes) {
            ExpressionFactory expressionFactory;
            MethodExpression method;
            ELContext elContext;
            String el;

            el = String.format("#{%s['%s']}", beanName, methodName);
            expressionFactory = getApplication().getExpressionFactory();
            elContext = getFacesContext().getELContext();
            method = expressionFactory.createMethodExpression(elContext, el,
                    expectedReturnType, expectedParamTypes);
            return method;
        }

    and the click event handler should look like:

        public void saveSubaccountListener(ActionEvent event) {
            MethodExpression method;
            ...
            method = getMethodExpression("backingBean", "createSubaccountListener",
                    SubuserBean.class);
            if (method != null)
                method.invoke(getFacesContext().getELContext(),
                        new Object[] { _editedSubuser });
        }

    That works fine as long as I provide an existing bean name (myBackingBean), but if I use backingBean the invoke() doesn't work, due to the following error:

        javax.el.PropertyNotFoundException: Target Unreachable, identifier 'backingBean' resolved to null

    Is there a way I can retrieve, from the facelet backing bean, the value of a parameter that has been passed to the facelet? In my case, the value of backingBean, which should be myBackingBean? I've searched for and tried different solutions, but with no luck yet.


  • Splitting 25mb .txt file into smaller files using text delimiter

    - by user574141
    Regards, SO. I am new to Python and Perl. I have been trying to solve a simple problem and getting tied in knots with syntax. I hope someone has the time and patience to help.

    I have a 25mb file in ".txt" format which contains news-wire articles going back to 1970. Each news story is concatenated to the next, with only the "Copyright" statement to delimit them. Each news story starts with "Item XX of XXX DOCUMENTS". There are certain metadata that are repeated throughout; I will use these for tagging later on.

    I wish to split this 25mb file into separate .txt files, each containing one news story (i.e. the text between "DOCUMENTS" and "Copyright"), saving each with a different name (obviously). I am trying to:

    1) open the file...
    2) iterate over the lines in the file, checking for the delimiter, and if it is not present, writing the line to a list
    3) write that list to a separate small file

    I'm having big problems with changing filenames using the counter, and how do I make Python start from where I left off? Is the "seek" function appropriate? So far I have been trying this approach, completely unsuccessfully:

        myfile = open("myfile.txt", 'r')
        filenumber = 0
        for line in myfile.readline():
            filenumber += 1
            w = 0
            while myfile.readline() != '\s+DOCUMENTS\s*\n'
                ### read my line into a list
                mysmallfile()['w'] = [myfile.readline()]
                w += 1
            output = open('C:\\Users\\dunner7\\Documents\###how do I change the filename each iteration???', 'w')
            output.writelines(mysmallfile)
            ###go back to start.

    Thank you for your time and patience. RD
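    For what it's worth, a minimal sketch of one way to do the split, treating each "Item XX of XXX DOCUMENTS" header as the start of a new story; the regex, the story_%04d.txt naming, and the output folder are illustrative assumptions:

        import re

        def flush(lines, n):
            # write one story out; the filename pattern is a made-up example
            if lines:
                out = open(r"C:\Users\dunner7\Documents\story_%04d.txt" % n, "w")
                out.writelines(lines)
                out.close()

        story, count = [], 0
        src = open("myfile.txt")
        for line in src:
            # "Item XX of XXX DOCUMENTS" marks the start of a new story
            if re.search(r"\bItem\s+\d+\s+of\s+\d+\s+DOCUMENTS\b", line) and story:
                flush(story, count)
                count += 1
                story = []
            story.append(line)
        flush(story, count)  # don't forget the last story
        src.close()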


  • use Ghostscript to convert pcl to postscript

    - by Bryon
    So I want to use Ghostscript to convert files that are created in PCL format to PostScript. That's the gist of my problem. I am simply trying to run it on the command line, but in the final stage it will have to be run from an lp command, like lp -d < gs something something. I have GPL Ghostscript 9.00 (2010-09-14). I will be running this on a Solaris 10 server, but I believe any Unix system should work similarly.

        bash-3.00# /usr/local/bin/gs -sDEVICE=pswrite -dLanguageLevel=1 -dNOPAUSE -dBATCH -dSAFER -sOutputFile=output.ps cms-form.pcl
        GPL Ghostscript 9.00 (2010-09-14)
        Copyright (C) 2010 Artifex Software, Inc.  All rights reserved.
        This software comes with NO WARRANTY: see the file PUBLIC for details.
        Error: /undefined in &k2G-210z100u0l6d0e63fa0V
        Operand stack:
        Execution stack:
           %interp_exit  .runexec2  --nostringval--  --nostringval--  --nostringval--
           2  %stopped_push  --nostringval--  --nostringval--  --nostringval--  false
           1  %stopped_push  1910  1  3  %oparray_pop  1909  1  3  %oparray_pop
           1893  1  3  %oparray_pop  1787  1  3  %oparray_pop  --nostringval--
           %errorexec_pop  .runexec2  --nostringval--  --nostringval--  --nostringval--
           2  %stopped_push  --nostringval--
        Dictionary stack:
           --dict:1154/1684(ro)(G)--  --dict:0/20(G)--  --dict:77/200(L)--
        Current allocation mode is local
        Current file position is 30
        GPL Ghostscript 9.00: Unrecoverable error, exit code 1


  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

    1. Invoice arrives, in the form of an email attachment, an Excel spreadsheet
    2. Monkey opens invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550
    3. Monkey sends this new spreadsheet to boss (email if lucky, printer otherwise), who sets the retail price
    4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing
    5. Monkey fires up HyperTerminal, types in "AT", disconnects
    6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist (a sketch of the middle steps follows below):

    - Monkey asks Thunderbird (mail server perhaps?) for the attachment
    - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something
    - Monkey parses the output, does the complex calculations
      // TODO: find a way to get the boss-generated prices with minimal manual labor involved
    - Monkey connects to the database, inserts data
    - Monkey spams customers

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
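    A minimal sketch of the parse-and-calculate step of that wishlist, assuming the invoice has been exported to CSV and that the "extremely complex calculation" really is a flat multiplier; the column names and the 550 factor are illustrative assumptions taken from the =B2*550 example:

        import csv

        # invoice.csv in, pricing.csv out, applying the =B2*550 style rule
        with open("invoice.csv") as src, open("pricing.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.writer(dst)
            writer.writerow(["item", "quantity", "suggested_retail"])
            for row in reader:
                writer.writerow([row["item"], row["quantity"],
                                 float(row["unit_price"]) * 550])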


  • prolog recursion

    - by AhmadAssaf
    I am making a function that will send me a list of all possible elements. In each iteration it gives me the last answer, but after the recursion I am only getting the last answer back. How can I make it give back every single answer? Thank you. The problem is that I am trying to find all possible distributions of a list into other lists. The code:

        test :-
            bp(3,12,[7, 3, 5, 4, 6, 4, 5, 2], Answer),
            format("Answer = ~w\n",[Answer]).

        bp(NB,C,OL,A):-
            addIn(C,OL,[[],[],[]],A);
            bp(NB,C,_,A).

        addIn(_,[],Result,Result).

        addIn(C,[Element|Rest],[F|R],Result):-
            member( Members , [F|R]),
            sumlist( Members, Sum),
            sumlist([Element],ElementLength),
            Cap is Sum + ElementLength,
            (Cap =< C,
             append([Element], Members,New),
             insert( Members, New, [F|R], PartialResult),
             addIn(C,Rest,PartialResult,Result)).

    By calling test, I am getting back the list of possible answers. Now, if I try something that will fail, like bp(3,11,[8,2,4,6,1,8,4],Answer), it just enters a while loop. Moreover, if I change the

        bp(NB,C,OL,A):-
            addIn(C,OL,[[],[],[]],A);
            bp(NB,C,_,A).

    to "and" instead of "or", I get the error:

        ERROR: is/2: Arguments are not sufficiently instantiated

    Appreciate the help. Thanks a lot @hardmath


  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. This is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:
    - All TIMESTAMPs will be converted to DATETIME values. This is to ensure consistency of approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application, and will be more portable when I eventually manage to escape from MySQL).
    - All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).

    Query modifications:
    - All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:
    - The application must store the user's time zone and preferred date format (e.g. US vs the rest of the world).
    - All timestamps will be converted according to the user settings before being displayed (see the sketch below).
    - All input timestamps will be converted to UTC according to the user settings before being input.

    Additional notes: converting formats will be done at the application level for several main reasons.
    - The approach to converting time zones varies from DB to DB, so handling it there would be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
    - MySQL TIMESTAMPs are limited in the range of dates they permit (~1970 to ~2038).
    - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
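    A hedged illustration of the convert-at-the-edges pattern sketched above; the application is PHP, but the shape is the same in any language, so Python and pytz stand in here and the function names are made up:

        from datetime import datetime
        import pytz  # any IANA time zone library works

        FMT = "%Y-%m-%d %H:%M:%S"  # what the DATETIME column stores

        def to_utc_for_storage(local_str, user_tz):
            # user input, interpreted in the user's zone, stored as UTC
            local = pytz.timezone(user_tz).localize(datetime.strptime(local_str, FMT))
            return local.astimezone(pytz.utc).strftime(FMT)

        def from_utc_for_display(utc_str, user_tz):
            # UTC from the database, rendered in the viewing user's zone
            utc = pytz.utc.localize(datetime.strptime(utc_str, FMT))
            return utc.astimezone(pytz.timezone(user_tz)).strftime(FMT)

        # e.g. an Australian EST user entering "2010-06-01 10:00:00":
        # to_utc_for_storage("2010-06-01 10:00:00", "Australia/Sydney")
        # -> "2010-06-01 00:00:00"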


  • How do I create a hyperlink in java?

    - by Justin984
    I'm going through the Google App Engine tutorials at https://developers.google.com/appengine/docs/java/gettingstarted/usingusers. I'm very new to Google App Engine, Java and web programming in general. So my question is: at the bottom of the page it says to add a link to allow the user to log out. So far I've got this:

        public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            UserService userService = UserServiceFactory.getUserService();
            User user = userService.getCurrentUser();

            if (user != null) {
                resp.setContentType("text/plain");
                resp.getWriter().println("Hello, " + user.getNickname());
                String logoutLink = String.format("<a href=\"%s\">Click here to log out.</a>",
                        userService.createLogoutURL(req.getRequestURI()));
                resp.getWriter().println(logoutLink);
            } else {
                resp.sendRedirect(userService.createLoginURL(req.getRequestURI()));
            }
        }

    However, instead of a link, the full string is printed to the screen, including the tags. When I look at the page source, I have no tags or any of the other stuff that goes with a webpage. I guess that makes sense considering I've done nothing to output any of that. Do I just do a bunch of resp.getWriter().println() statements to output the rest of the webpage, or is there something else I don't know about? Thanks!


  • Executing a system command in PHP differs between browser and command line

    - by Amit
    Hi, I have to execute a Linux "more" command in PHP from a particular offset, format the result, and display the result in the browser. My code for this is:

        <html>
        <head>
        <META HTTP-EQUIV=REFRESH CONTENT=10>
        <META HTTP-EQUIV=PRAGMA CONTENT=NO-CACHE>
        <title>Runtime Access log</title>
        </head>
        <body>
        <?php
        $moreCommand = "more +3693 /var/log/apache2/access_log | grep -v -e '.jpg' -e '.jpeg' -e '.css' -e '.js' -e '.bmp' -e '.ico'| wc -l";
        exec($moreCommand, $accessDisplay);
        echo "<br/>No of lines are : $accessDisplay[0] <br/>";
        ?>

    The output in the browser is:

        No of lines are : 3428 (This is wrong)

    Executing the same command from the command line gives a different output. My code snippet for that is:

        <?php
        $moreCommand = "more +3693 /var/log/apache2/access_log | grep -v -e '.jpg' -e '.jpeg' -e '.css' -e '.js' -e '.bmp' -e '.ico'| wc -l";
        exec($moreCommand, $accessDisplay);
        echo "No of lines are : $accessDisplay[0] \n";
        ?>

    The output on the command line is:

        No of lines are : 279 (This is correct)

    I am unable to understand why the output of the same command is wrong in the browser. It is actually giving the word count of all lines, ignoring the offset parameter. Please help!! Thanks, Amit


  • Dynamically add data stored in php to nested json

    - by HoGo
    I am trying to dynamically generate data in JSON for a jQuery Gantt chart. I know PHP but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format:

        var data = [{
            name: "Sprint 0",
            desc: "Analysis",
            values: [{
                from: "/Date(1320192000000)/",
                to: "/Date(1322401600000)/",
                label: "Requirement Gathering",
                customClass: "ganttRed"
            }]
        },{
            name: " ",
            desc: "Scoping",
            values: [{
                from: "/Date(1322611200000)/",
                to: "/Date(1323302400000)/",
                label: "Scoping",
                customClass: "ganttRed"
            }]
        },
        <!-- Some more data -->
        }];

    Now I have all the data in a PHP db result. Here it goes:

        $rows = $db->fetchAllRows($result);
        $rowsNum = count($rows);

    And this is how I wanted to create the JSON out of it:

        var data = '';
        <?php foreach ($rows as $row){ ?>
            data['name'] = "<?php echo $row['name'];?>";
            data['desc'] = "<?php echo $row['desc'];?>";
            data['values'] = {"from" : "/Date(<?php echo $row['from'];?>)/",
                              "to" : "/Date(<?php echo $row['to'];?>)/",
                              "label" : "<?php echo $row['label'];?>",
                              "customClass" : "ganttOrange"};
        }

    However, this does not work. I have tried without the loop, and replacing the PHP variables with plain text just to check, but it did not work either: it displays the chart without the added items. If I add a new item by adding it to the list of values, it works, so there is no problem with the Gantt itself or with paths. Based on all of the above, I assume the problem is with adding plain data to the JSON. Can anyone please help me fix it?


  • Linq and returning types

    - by cdotlister
    My GUI is calling a service project that does some LINQ work and returns data to my GUI. However, I am battling with the return type of the method. After some reading, I have this as my method:

        public static IEnumerable GetDetailedAccounts()
        {
            IEnumerable accounts = (from a in Db.accounts
                                    join i in Db.financial_institution
                                        on a.financial_institution.financial_institution_id equals i.financial_institution_id
                                    join acct in Db.z_account_type
                                        on a.z_account_type.account_type_id equals acct.account_type_id
                                    orderby i.name
                                    select new { account_id = a.account_id, name = i.name, description = acct.description });
            return accounts;
        }

    However, my caller is battling a bit. I think I am screwing up the return type, or not handling the caller well, but it's not working as I'd hoped. This is how I am attempting to call the method from my GUI:

        IEnumerable accounts = Data.AccountService.GetDetailedAccounts();
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("Accounts:");
        Console.ForegroundColor = ConsoleColor.White;

        foreach (var acc in accounts)
        {
            Console.WriteLine(string.Format("{0:00} {1}", acc.account_id, acc.name + " " + acc.description));
        }

        int accountid = WaitForKey();

    However, the foreach, and the acc, isn't working: acc doesn't know about the name, description and id that I set up in the method. Am I at least close to being right?


  • How do I do client-side form validation in jQuery?

    - by marcamillion
    I am trying to use this plugin: http://docs.jquery.com/Plugins/Validation. Where #user_new is the id of my form, this is what my code looks like:

        $('#user_new').validate({
            rules: {
                user[username]: "required",
                user[email]: {
                    required: true,
                    email: true
                }
            },
            messages: {
                user[username]: "Please specify your name",
                user[email]: {
                    required: "We need your email address to contact you",
                    email: "Your email address must be in the format of [email protected]"
                }
            }
        })

    And these are my input fields as rendered on the page (generated by Rails):

        <input class="clearField curved" id="user_f_name" name="user[f_name]" size="30" type="text" value="First Name" /><br />
        <input class="clearField curved" id="user_l_name" name="user[l_name]" size="30" type="text" value="Last Name" /><br />
        <input class="clearField curved" id="user_username" name="user[username]" size="30" type="text" value="Username" /><br />
        <input class="clearField curved" id="user_password" name="user[password]" size="30" type="password" value="Password" /><br />
        <input class="clearField curved" id="user_password_confirmation" name="user[password_confirmation]" size="30" type="password" value="Password" /><br />
        <input class="clearField curved" id="user_email" name="user[email]" size="30" type="text" value="Email Address" /><br />

    I was just trying to validate username and email first, then take it from there. For the life of me, I can't figure out how to specify the syntax and the rules for working with this plugin. Help!


  • Output query in strict table format in CodeIgniter

    - by riad
    Dear all, my code is below. It shows the output in table format with no problems. But when a particular tr gets long output from the database, the table breaks. Now, how can I fix the tr width strictly? Let's say I want each td to be no more than 100px wide. How can I do it? Note: here "table" means an HTML table, not the database table.

        if ($query->num_rows() > 0) {
            $output = '';
            foreach ($query->result() as $function_info) {
                if ($description) {
                    $output .= ''.$function_info->songName.'';
                    $output .= ''.$function_info->albumName.'';
                    $output .= ''.$function_info->artistName.'';
                    $output .= ''.$function_info->Code1.'';
                    $output .= ''.$function_info->Code2.'';
                    $output .= ''.$function_info->Code3.'';
                    $output .= ''.$function_info->Code4.'';
                    $output .= ''.$function_info->Code5.'';
                } else {
                    $output .= ''.$function_info->songName.'';
                }
            }
            $output .= '';
            return $output;
        } else {
            return 'Result not found.';
        }

    Thanks, riad


  • Different approaches for finding users within Active Directory

    - by EvilDr
    I'm a newbie to AD programming, but after a couple of weeks of research I have found the following three ways to search for users in Active Directory using the account name as the search parameter.

    Option 1 - FindByIdentity:

        Dim ctx As New PrincipalContext(ContextType.Domain, Environment.MachineName)
        Dim u As UserPrincipal = UserPrincipal.FindByIdentity(ctx, IdentityType.SamAccountName, "MYDOMAIN\Administrator")
        If u Is Nothing Then
            Trace.Warn("No user found.")
        Else
            Trace.Warn("Name=" & u.Name)
            Trace.Warn("DisplayName=" & u.DisplayName)
            Trace.Warn("DistinguishedName=" & u.DistinguishedName)
            Trace.Warn("EmployeeId=" & u.EmployeeId)
            Trace.Warn("EmailAddress=" & u.EmailAddress)
        End If

    Option 2 - DirectorySearcher:

        Dim connPath As String = "LDAP://" & Environment.MachineName
        Dim de As New DirectoryEntry(connPath)
        Dim ds As New DirectorySearcher(de)
        ds.Filter = String.Format("(&(objectClass=user)(anr={0}))", Split(User.Identity.Name, "\")(1))
        ds.PropertiesToLoad.Add("name")
        ds.PropertiesToLoad.Add("displayName")
        ds.PropertiesToLoad.Add("distinguishedName")
        ds.PropertiesToLoad.Add("employeeId")
        ds.PropertiesToLoad.Add("mail")
        Dim src As SearchResult = ds.FindOne()
        If src Is Nothing Then
            Trace.Warn("No user found.")
        Else
            For Each propertyKey As String In src.Properties.PropertyNames
                Dim valueCollection As ResultPropertyValueCollection = src.Properties(propertyKey)
                For Each propertyValue As Object In valueCollection
                    Trace.Warn(propertyKey & "=" & propertyValue.ToString)
                Next
            Next
        End If

    Option 3 - PrincipalSearcher:

        Dim ctx2 As New PrincipalContext(ContextType.Domain, Environment.MachineName)
        Dim sp As New UserPrincipal(ctx2)
        sp.SamAccountName = "MYDOMAIN\Administrator"
        Dim s As New PrincipalSearcher
        s.QueryFilter = sp
        Dim p2 As UserPrincipal = s.FindOne()
        If p2 Is Nothing Then
            Trace.Warn("No user found.")
        Else
            Trace.Warn(p2.Name)
            Trace.Warn(p2.DisplayName)
            Trace.Warn(p2.DistinguishedName)
            Trace.Warn(p2.EmployeeId)
            Trace.Warn(p2.EmailAddress)
        End If

    All three of these methods return the same results, but I was wondering if any particular method is better or worse than the others? Options 1 and 3 seem best, as they provide strongly-typed property names, but I might be wrong. My overall objective is to find a single user within AD based on the user principal value passed via the web browser when using Windows Authentication on a site (e.g. "MYDOMAIN\MyUserAccountName").


  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow about creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" - deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary number of columns with an arbitrary number of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." - Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" - deceze

    Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?
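    A minimal sketch of the JSON-storage option deceze mentions, with sqlite3 standing in for MySQL purely so the example is self-contained (the two-column shape is the point, not the driver): keep the one guaranteed column, mobile, as a real indexed column, and store the rest of each CSV row as a JSON document. The column names in the sample CSV are assumptions.

        import csv
        import json
        import sqlite3

        conn = sqlite3.connect("uploads.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS csv_rows (
                            mobile  TEXT NOT NULL,
                            payload TEXT NOT NULL)""")
        conn.execute("CREATE INDEX IF NOT EXISTS idx_mobile ON csv_rows (mobile)")

        with open("upload.csv") as f:
            for row in csv.DictReader(f):
                mobile = row.pop("mobile")  # the one column every upload shares
                conn.execute("INSERT INTO csv_rows VALUES (?, ?)",
                             (mobile, json.dumps(row)))  # everything else rides along
        conn.commit()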


  • Is there a table of OpenGL extensions, versions, and hardware support somewhere?

    - by Thomas
    I'm looking for some resource that can help me decide what OpenGL version my game needs at minimum, and what features to support through extensions. Ideally, a table of the following format:

                        1.0   1.1   1.2   1.2.1   1.3   ...
        multitexture    -     ARB   ARB   core    core
        texture_float   -     EXT   EXT   ARB     ARB
        ...

    (Not sure about the values I put in, but you get the idea.) The extension specs themselves, at opengl.org, list the minimum OpenGL version they need, so that part is easy. However, many extensions have been accepted and became core standard in subsequent OpenGL versions, and it is very hard to find out when that happened. The only way I could find is to compare the full OpenGL standards document for each version.

    On a related note, I would also very much like to know which extensions/features are supported by which hardware, to help me decide what features I can safely use in my game and which ones I need to make optional. For example, a big honkin' table like this:

                      MAX_TEXTURE_IMAGE_UNITS   MAX_VERTEX_TEXTURE_IMAGE_UNITS   ...
        GeForce 6xxx  8                         4
        GeForce 7xxx  16                        8
        ATi x300      8                         4
        ...

    (Again, I'm making the values up.) The table could list hardware limitations from glGet, but also support for particular extensions, and limitations of such extension support (e.g. what floating-point texture formats are supported in hardware). Any pointers to these or similar resources would be hugely appreciated!


  • Some images fails to load on Windows Server 2008

    - by Guffa
    I have an application running on Windows Server 2008 that processes uploaded images. Currently it successfully processes about 8000 images per day, creating 11 different sizes of each image. The problem I have is that sometimes the application fails to load some images, and I get the error "System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.". The upload only accepts files with a JPEG extension (jpg/jpeg/jpe) or with a JPEG MIME type, and from what I can tell those images really are JPEG images.

    If I look at the image file in Windows Explorer on the server, it can successfully extract the thumbnail from the file, but if I try to open it, I get the error message "This is not a valid bitmap file, or its format is not currently supported." from Paint. If I copy the image to my own computer, running Windows 7, there is no problem opening the image. It works in Paint, Windows Photo Viewer, Adobe Bridge, and Photoshop. If I load the image using Image.FromStream the same way as in the application running on the server, it loads just fine. (I have copied the file back to the server, and it still doesn't work, so there is nothing in the copying process that changes it.)

    When I look at the image information in Bridge, I see that the images were created using Picasa 3.0, but other than that I can't see anything special about them. I haven't yet found anyone having the same problem, or any known problems like this with the Picasa application. Has anyone had a similar problem, or does anyone know if there is something special about images created using Picasa? Is there an image codec that needs installing on the server to handle all kinds of JPEG images?


  • object / class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well, because I found some contradictory info. For example, Ivor Horton writes:

        "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved."

    and on the other hand there is this: Are methods also serialized along with the data members in .NET?

    Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like in a precompiled LIB or DLL? During the DOS ages I used assembly now and then. As far as I remember from Delphi, and from the answer by dan04 at the link above, sizeof(<OBJECT or CLASS>) will give the size of all data members together (no methods/procedures). A nice C example is also given there, with data and members declared in one class/struct, while at runtime the methods are separate procedures acting on a struct of data. However, I think that later class/object implementations like Pascal's VMT may be different in memory.
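    The behaviour the linked .NET answer describes is easy to see first-hand with Python's pickle, used here purely as an illustration since the principle is language-independent: the serialized payload carries the data members plus a reference to the class by name, and the methods are resolved from the class definition at load time rather than read from disk.

        import pickle

        class Point(object):
            def __init__(self, x, y):
                self.x, self.y = x, y

            def norm(self):
                # method code lives in the class object, not in the pickle
                return (self.x ** 2 + self.y ** 2) ** 0.5

        blob = pickle.dumps(Point(3, 4))
        assert b"norm" not in blob               # no opcodes or method names stored
        assert pickle.loads(blob).norm() == 5.0  # method found again via the class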


  • Parse large XML file w/ script or use BioPython API ?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and in a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser (XML). I chose Ruby, and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser?

    2) BioPython. I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root",
                                              passwd = "", host = "localhost", db = "bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB):

        Error Number: 1205
        Lock wait timeout exceeded; try restarting transaction.

    I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
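    One hedged idea for the lock timeout, sketched from a reading of the BioSQL loader rather than tested against it (so treat db.load's behaviour with partial input as an assumption to verify): db.load() takes an iterator of records, so the 2.1 GB file could be loaded in slices with a commit after each one, keeping any single InnoDB transaction small:

        from itertools import islice

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost",
                                              db="bioseqdb")
        db = server["uniprot"]

        records = SeqIO.parse(open("/path/to/uniprot_sprot.dat"), "swiss")
        while True:
            chunk = list(islice(records, 1000))  # 1000 records per transaction
            if not chunk:
                break
            db.load(chunk)
            server.commit()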


  • Entity Framework DateTime update extremely slow

    - by Phyxion
    I have this situation currently with Entity Framework:

        using (TestEntities dataContext = DataContext)
        {
            UserSession session = dataContext.UserSessions.FirstOrDefault(userSession => userSession.Id == SessionId);
            if (session != null)
            {
                session.LastAvailableDate = DateTime.Now;
                dataContext.SaveChanges();
            }
        }

    This is all working perfectly, except for the fact that it is terribly slow compared to what I expect (14 calls per second, tested with 100 iterations). When I update this record manually through this command:

        dataContext.Database.ExecuteSqlCommand(String.Format("update UserSession set LastAvailableDate = '{0}' where Id = '{1}'",
            DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fffffff"), SessionId));

    I get 55 calls per second, which is more than fast enough. However, when I don't update session.LastAvailableDate but instead update an integer (e.g. session.UserId) or a string with Entity Framework, I get 50 calls per second, which is also more than fast enough. Only the datetime field is terribly slow.

    The difference of a factor of 4 is unacceptable, and I was wondering how I can improve this, as I prefer not to use direct SQL when I can also use Entity Framework. I'm using Entity Framework 4.3.1 (I also tried 4.1).

