Search Results

Search found 2922 results on 117 pages for 'raw noob'.

Page 98/117 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • Heuristic to identify if a series of 4 bytes chunks of data are integers or floats

    - by flint
    What's the best heuristic I can use to identify whether a chunk of X 4-byte values holds integers or floats? A human can do this easily, but I wanted to do it programmatically. I realize that since every combination of bits results in a valid integer and (almost?) all of them also result in a valid float, there is no way to know for sure. But I would still like to identify the most likely candidate (which will virtually always be correct; or at least, a human can do it). For example, let's take a series of 4-byte raw values and print each one as an integer first and then as a float:

        1            1.4013e-45
        10           1.4013e-44
        44           6.16571e-44
        5000         7.00649e-42
        1024         1.43493e-42
        0            0
        0            0
        -5           -nan
        11           1.54143e-44

    Obviously these are integers. Now, another example:

        1065353216   1
        1084227584   5
        1085276160   5.5
        1068149391   1.33333
        1083179008   4.5
        1120403456   100
        0            0
        -1110651699  -0.1
        1195593728   50000

    These are obviously floats. PS: I'm using C++, but you can answer in any language, pseudocode or just in English.
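    One possible heuristic, sketched in Python since the question allows any language (the magnitude thresholds, the little-endian assumption and the majority vote are arbitrary choices, not a definitive answer): reinterpret each chunk both ways and prefer the float reading only when it lands in a plausible range.

        import struct

        def looks_like_float(chunk):
            """Rough heuristic for one 4-byte little-endian chunk: prefer the
            float reading only when it falls in a 'plausible' magnitude range."""
            as_int = struct.unpack('<i', chunk)[0]
            as_float = struct.unpack('<f', chunk)[0]
            if as_int == 0:
                return False              # all-zero bits: call it the integer 0
            if as_float != as_float:      # NaN never looks like intentional data
                return False
            # Tiny denormals (1.4e-45 and friends) are almost always small
            # integers in disguise; hand-entered floats sit in a moderate range.
            return 1e-10 < abs(as_float) < 1e10

        def classify(chunks):
            votes = sum(looks_like_float(c) for c in chunks)
            return 'floats' if votes > len(chunks) / 2 else 'integers'

        # The question's first series reads as small integers, the second as floats.
        ints = [struct.pack('<i', v) for v in (1, 10, 44, 5000, 1024, 0, -5, 11)]
        flts = [struct.pack('<f', v) for v in (1, 5, 5.5, 4.5, 100, -0.1, 50000)]
        print(classify(ints))   # integers
        print(classify(flts))   # floats

    The same idea carries over to C++ by memcpy-ing the four bytes into an int32_t and a float and applying the same checks.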

    Read the article

  • Basic Steps in reading Excel files into matlab

    - by user3693727
    >> [NUM,TXT,RAW]=xlsread('C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1')
        ??? Error using ==> xlsread at 219
        XLSREAD unable to open file C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.
        File C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls not found.

    This is the error I receive when I try to read a simple Excel file into MATLAB. This is a snapshot of the spreadsheet I would like to load in. Could you guide me through the basic know-how to extract this data? I have looked through the other questions about reading Excel files into MATLAB, but I am still very confused. I ultimately wish to extract the file below for my project using the same method. The second image shows the data I have to extract, which I could not do. Its file type seems to be different: it is a comma-separated values file, not xls. Hence, I am also confused about whether a different file type prevents extraction of the data. Thank you for helping(:

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI layer.

        The Data Layer would access the database.
        The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
        The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
        The business logic runs close to the data, meaning less network traffic.
        All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
        Scattering of the business logic: some part is here, some part is there.
        Questionable design solution; it may encounter unknown problems.
        Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?

    Read the article

  • Windows Programming: ID2D1Bitmap Interface - Getting the Bitmap Data

    - by LostInBrackets
    I've been writing my own library of functions to access some of the new Direct2D Windows libraries. In particular, I've been working on the ID2D1Bitmap interface. I wanted to write a function to return a pointer to the start of the bitmap data (for the editing of particular pixels, or custom encoding, or whatever else I might wish for in the future). Unfortunately... problem ahead... I can't seem to find a way to get access to the raw pixel data from the ID2D1Bitmap interface. Does anyone have an idea how to access this? One of my friends suggested drawing the bitmap to a surface and extracting the bitmap data from there. I don't know if this would work. It definitely seems inefficient and I wouldn't know which kind of surface to use. Any help is appreciated. (C++ in particular, but I assume the code won't be too different between languages.) (I know I could just read the data directly from the file, but I'm using the WIC decoders, which means it could be in any number of indecipherable formats.)

    Read the article

  • Problem with XML encoding of database contents with Latin characters

    - by user89691
    I have an ASP Access database that contains strings in various European languages. The database was previously populated by agents in the respective countries. It contains entries with accented and other special characters, as you would expect. If I open the database with MS Access these characters show up fine. For example, the German equivalent of "Open" shows as "Öffnen" (hopefully you can see an "O" with 2 dots above it!). I have ASP code that reads the database and returns records in XML. The text is passed to XMLEncode to construct the XML, but that only seems to deal with the 5 specials like "<", "&", etc. If I dump the XML, the accented characters are unchanged. <English>Open</English> <German>Öffnen</German> If I look at the raw packets with Wireshark I see that the "Ö" byte is hex D6, which appears to be its Unicode and ISO 8859-1 code point. The problem starts when I try to parse the XML in client-side JS. I get: "An invalid character was found in text content" from IE. FF and Chrome happily accept the XML without hiccup, but the browser shows the "Ö" character as a diamond with a question mark inside. http://www.validome.org/xml/validate/ reports "encoding error." http://www.w3schools.com/dom/dom_validate.asp thinks it is fine. The XML is UTF-8 encoded. What do I need to do to have IE accept my XML without complaint? What do I need to do to have browsers display the stuff correctly?
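    A small illustration of what is going on (Python here, not the original ASP, purely to show the bytes involved): the single byte 0xD6 is "Ö" in ISO 8859-1, but on its own it is not legal UTF-8, which is exactly what a strict parser such as IE's rejects. Declaring the real encoding, or transcoding the output to UTF-8, should satisfy both checks.

        latin1_bytes = b'<German>\xd6ffnen</German>'

        try:
            latin1_bytes.decode('utf-8')
        except UnicodeDecodeError as e:
            print('strict UTF-8 parser rejects it:', e)

        # Two common fixes: declare the real encoding in the XML prolog...
        declared = b'<?xml version="1.0" encoding="ISO-8859-1"?>' + latin1_bytes
        # ...or actually transcode the document to UTF-8 before serving it.
        utf8_bytes = latin1_bytes.decode('iso-8859-1').encode('utf-8')
        print(utf8_bytes)   # b'<German>\xc3\x96ffnen</German>' -- two bytes for the one character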

    Read the article

  • AudioFileWriteBytes fails with error code -40

    - by alexbw
    I'm trying to write raw audio bytes to a file using AudioFileWriteBytes(). Here's what I'm doing:

        void writeSingleChannelRingBufferDataToFileAsSInt16(AudioFileID audioFileID, AudioConverterRef audioConverter, ringBuffer *rb, SInt16 *holdingBuffer) {
            // First, figure out which bits of audio we'll be
            // writing to file from the ring buffer
            UInt32 lastFreshSample = rb->lastWrittenIndex;
            OSStatus status;
            int numSamplesToWrite;
            UInt32 numBytesToWrite;
            if (lastFreshSample < rb->lastReadIndex) {
                numSamplesToWrite = kNumPointsInWave + lastFreshSample - rb->lastReadIndex - 1;
            } else {
                numSamplesToWrite = lastFreshSample - rb->lastReadIndex;
            }
            numBytesToWrite = numSamplesToWrite*sizeof(SInt16);

    Then we copy the audio data (stored as floats) to a holding buffer (SInt16) that will be written directly to the file. The copying looks funky because it's from a ring buffer.

            UInt32 buffLen = rb->sizeOfBuffer - 1;
            for (int i=0; i < numSamplesToWrite; ++i) {
                holdingBuffer[i] = rb->data[(i + rb->lastReadIndex) & buffLen];
            }

    Okay, now we actually try to write the audio from the SInt16 buffer "holdingBuffer" to the audio file. The NSLog will spit out an error -40, but also claims that it's writing bytes. No data is written to file.

            status = AudioFileWriteBytes(audioFileID, NO, 0, &numBytesToWrite, &holdingBuffer);
            rb->lastReadIndex = lastFreshSample;
            NSLog(@"Error = %d, wrote %d bytes", status, numBytesToWrite);
            return;

    Read the article

  • emberjs on symfony2 dev environment doesn't work properly

    - by rkmax
    I've built an app with symfony2; the app exposes a REST API. Now I have built a simple client to consume it.

        app.coffee - app.js

        App = Em.Application.create ready: -> @.entradas.load() Entrada: Em.Object.extend() entradas: Em.ArrayController.create content: [] load: -> url = 'http://localhost/api/1/entrada' me = @ $.ajax( url: url, method: 'GET', success: (data) -> me.set('content', []) for entrada in data.data.objects me.pushObject DBPlus.Entrada.create(entrada) )

        MyBundle:Home:index.html.twig

        <script type="text/x-handlebars" src="{{ asset('js/templates/entradas.hbs') }}"></script>
        <script src="{{ asset('js/libs/jquery-1.7.2.min.js') }}"></script>
        <script src="{{ asset('js/libs/handlebars-1.0.0.beta.6.js') }}"></script>
        <script src="{{ asset('js/libs/ember-1.0.pre.min.js') }}"></script>
        <script src="{{ asset('js/app.js') }}"></script>

    The problem is that when I run in the dev environment and link the template like <script type="text/x-handlebars" src="{{...}}">, the app doesn't work; nothing shows. It works fine in the prod environment. The only way it works in the dev environment is with an inline template:

        MyBundle:Home:index.html.twig

        <script type="text/x-handlebars">
        {% raw %}
        <ul class="entradas">
        {{#each App.entradas}}
            <li class="entrada">{{nombre}}</li>
        {{/each}}
        </ul>
        {% endraw %}
        </script>

    Can anyone explain this behavior? Note: I disabled the debug profiler toolbar, and nothing changed.

    Read the article

  • Rendering LaTeX on third-party websites

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? Some notes: It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone. This question is a repost of a question from MathOverflow that was closed for not being related to math. I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).

    Read the article

  • Reset.css and then a Set.css

    - by Sixfoot Studio
    I have, for a while now been using a reset.css file to reset everything before I start laying out my html designs. The reset is great in that it allows one to better control attributes such as margins, padding, line-height etc for all browsers. In essence the flatliner of css files. Now to get the heart beating again, I need a "set.css" file. So what I have done is created an Html file with all the possible elements on the page to then go and set the padding, margins etc of the h1, h2, p, td etc. I need some help with this as I am not sure what the defaults normally are. I had a look at the Firefox default css file that's used to generate all these attributes on a raw html file but it doesn't cover all the scenarios I could come up with when developing a site. Here's an example of the set.html file (a work in progress) which can be used as a lorem ipsum filler to add to your first page in a cms and then to style with a "set.css" file http://www.sixfoot.co.za/labs/Html-Css/set.html I'd appreciate it if someone knows if something like a set.css file exists or if someone could tell me what the general padding and margins are in cases like this when you have reset the css. Cheers, James

    Read the article

  • django: can't adapt error when importing data from postgres database

    - by Oleg Tarasenko
    Hi, I'm having a strange error installing a fixture from dumped data. I am using psycopg2 and django 1.1.1.

        silver:probsbox oleg$ python manage.py loaddata /Users/oleg/probs.json
        Installing json fixture '/Users/oleg/probs' from '/Users/oleg/probs'.
        Problem installing fixture '/Users/oleg/probs.json': Traceback (most recent call last):
          File "/opt/local/lib/python2.5/site-packages/django/core/management/commands/loaddata.py", line 153, in handle
            obj.save()
          File "/opt/local/lib/python2.5/site-packages/django/core/serializers/base.py", line 163, in save
            models.Model.save_base(self.object, raw=True)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/base.py", line 495, in save_base
            result = manager._insert(values, return_id=update_pk)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/manager.py", line 177, in _insert
            return insert_query(self.model, values, **kwargs)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/query.py", line 1087, in insert_query
            return query.execute_sql(return_id)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/subqueries.py", line 320, in execute_sql
            cursor = super(InsertQuery, self).execute_sql(None)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/query.py", line 2369, in execute_sql
            cursor.execute(sql, params)
          File "/opt/local/lib/python2.5/site-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
        ProgrammingError: can't adapt

    First I checked similar issues on the internet. This one seemed very related: http://code.djangoproject.com/ticket/5996, as my data has many non-ASCII symbols. But I've checked my django installation and it's OK there. Could you advise what is wrong?
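    A hedged note rather than a definitive fix: psycopg2 raises "can't adapt" when one of the values in the INSERT has a Python type it has no adapter registered for. A quick way to narrow it down is to inspect the types in the fixture and, if an exotic type really is involved, register an adapter for it (the Probability class and adapt_probability below are made-up names, only to show the mechanism).

        import json
        from psycopg2.extensions import register_adapter, AsIs

        # 1. See what types the fixture actually contains (Django fixtures are a
        #    list of {"model": ..., "pk": ..., "fields": {...}} objects).
        with open('/Users/oleg/probs.json') as f:
            for obj in json.load(f):
                for field, value in obj['fields'].items():
                    print(obj['model'], field, type(value))

        # 2. If an unusual type shows up, teach psycopg2 how to pass it through.
        class Probability(object):
            def __init__(self, value):
                self.value = value

        def adapt_probability(p):
            # render the value as a SQL literal psycopg2 can embed directly
            return AsIs(repr(float(p.value)))

        register_adapter(Probability, adapt_probability)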

    Read the article

  • How to add legend to imshow() in matplotlib

    - by rankthefirst
    I am using matplotlib. In plot() or bar(), we can easily add a legend if we add labels to them. But what if it is a contourf() or imshow()? I know there is a colorbar() which can present the color range, but that is not satisfactory. I want a legend which has names (labels). The best I can think of is to add labels to each element in the matrix and then try legend() to see if it works, but how do I add a label to an element, i.e. to a value? In my case, the raw data is like:

        1,2,3,3,4
        2,3,4,4,5
        1,1,1,2,2

    For example, 1 represents 'grass', 2 represents 'sand', 3 represents 'hill'... and so on. imshow() works perfectly with my case, but without the legend. My question is: is there a function that can automatically add a legend, so that in my case I just have to do something like someFunction('grass','sand',...)? If there isn't, how do I add labels to each value in the matrix? For example, label all the 1s in the matrix 'grass', label all the 2s in the matrix 'sand'... and so on. Thank you!
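    One common approach, sketched below (there is no built-in someFunction): imshow() produces no labelled artists, so build "proxy" patches that carry the labels and colors and hand those to legend(). The 'water' and 'rock' names for the values 4 and 5 are invented here just to cover every value in the sample matrix.

        import matplotlib.pyplot as plt
        import matplotlib.patches as mpatches
        import numpy as np

        data = np.array([[1, 2, 3, 3, 4],
                         [2, 3, 4, 4, 5],
                         [1, 1, 1, 2, 2]])
        names = {1: 'grass', 2: 'sand', 3: 'hill', 4: 'water', 5: 'rock'}

        im = plt.imshow(data, interpolation='nearest')
        # reuse the exact colour imshow picked for each value
        patches = [mpatches.Patch(color=im.cmap(im.norm(v)), label=names[v])
                   for v in sorted(names)]
        plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.0)
        plt.show()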

    Read the article

  • Django1.1 model field value preprocessing before returning

    - by Satoru.Logic
    Hi, all. I have a model class like this:

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = NoteContentField(max_length=256)

    NoteContentField is a custom subclass of CharField that overrides the to_python method for the purpose of doing some twitter-text-conversion processing.

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase
            def to_python(self, value):
                value = super(NoteContentField, self).to_python(value)
                from ..utils import linkify
                return mark_safe(linkify(value))

    However, this doesn't work. When I save a Note object like this:

        note = Note(author=request.use, content=form.cleaned_data['content'])
        note.save()

    the converted value is saved into the database, which is not what I want to see. What I'm trying to do is save the raw content into the database, and only make the conversion when the content attribute is later accessed. Would you please tell me what's wrong with this?

    Thanks to Pierre and Daniel. I have figured out what's wrong. I thought the text-conversion code should be in either to_python or get_db_prep_value, and that's wrong. I should override both of them, make to_python do the conversion and get_db_prep_value return the unconverted value:

        from ..utils import linkify

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase
            def to_python(self, value):
                self._raw_value = super(NoteContentField, self).to_python(value)
                return mark_safe(linkify(self._raw_value))
            def get_db_prep_value(self, value):
                return self._raw_value

    I wonder if there is a better way to implement this?
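    One simpler alternative, sketched here under the assumption that linkify is the same helper from the question: keep a plain CharField so the raw text is what gets stored, and do the conversion in a read-only property on the model instead of in a custom field. This also avoids keeping per-save state (_raw_value) on the field object, which is shared between model instances.

        from django.db import models
        from django.contrib.auth.models import User
        from django.utils.safestring import mark_safe

        from ..utils import linkify   # same helper as in the question

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = models.CharField(max_length=256)   # raw text in the database

            @property
            def content_html(self):
                # conversion happens only when the attribute is read
                return mark_safe(linkify(self.content))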

    Read the article

  • Shopify JSONP issue in ajaxAPI

    - by Aaron U
    I'm getting an odd response back from the Shopify Ajax API for JSONP. If you cURL a Shopify Ajax API location http://storename.domain.com/cart.json?callback=handler you will get a JSONP response, but something is breaking the same request in browsers. It appears to be related to compression? Here are the responses from each browser when attempting to call the JSONP as documented.

        Firefox: The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
        Internet Explorer: Internet Explorer cannot display the webpage
        Chrome/Safari/Webkit: Cannot decode raw data, or failed (chrome)

    Attempted use via jQuery:

        $.getJSON('http://storename.domain.com/cart.json?callback=?', function(data) { ... });
        // Results in a failed request, viewable in the network request panels of dev tools

    Here is some output from cURL including response headers:

        $ curl -i http://storename.domain.com/cart.json?callback=CALLBACK_FUNC
        HTTP/1.1 200 OK
        Server: nginx
        Date: Tue, 18 Dec 2012 13:48:29 GMT
        Content-Type: application/javascript; charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Status: 200 OK
        ETag: cachable:864076445587123764313132415008994143575
        Cache-Control: max-age=0, private, must-revalidate
        X-Alternate-Cache-Key: cachable:11795444887523410552615529412743919200
        X-Cache: hit, server
        X-Request-Id: a0c33a55230fe42bce79b462f6fe450d
        X-UA-Compatible: IE=Edge,chrome=1
        Set-Cookie: _session_id=b6ace1d7b0dbedd37f7787d10e173131; path=/; HttpOnly
        X-Runtime: 0.033811
        P3P: CP="NOI DSP COR NID ADMa OPTa OUR NOR"

        CALLBACK_FUNC({"token":null,"note":null,"attributes":{},"total_price":0,...})

    Also related and unanswered here: Shopify Ajax API JSONP supported? Thanks

    Read the article

  • Question about registering COM server and Add Reference to it in a C# project

    - by smwikipedia
    I built a COM server in raw C++; here is the procedure:

        (1) Write an IDL file to define the interface and library.
        (2) Use midl.exe to compile the IDL file into the necessary .h, .c, .tlb files.
        (3) Implement the COM server in C++ and build a .dll file.
        (4) Add the following registry entries:

            [HKEY_CLASSES_ROOT\RawComCarLib.ComCar.1\CurVer]
            @="RawComCarLib.ComCar.1"
            ;CLSID
            [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}]
            @="RawComCarLib.ComCar.1"
            [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\InprocServer32]
            @="D:\com\Project01.dll"
            [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\ProgID]
            @="RawComCarLib.ComCar.1"
            [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\TypeLib]
            @="{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}"
            ;TypeLib
            [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}]
            [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0]
            @="Car Server Type Lib"
            [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0]
            [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32]
            @="D:\com\Project01.tlb"
            [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\FLAGS]
            @="0"
            [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32]
            @="C:\Windows\System32\msdatsrc.tlb"

        (5) I try to add a reference to the COM server by clicking Add Reference in the C# project.
        (6) In the COM tab, I saw my "Car Server Type Lib"; everything is OK up to this point.

    I then try to use the Object Browser to browse my COM lib, but Visual Studio says "the following components could not be browsed", and I noticed that no new reference was added to the References list of the C# project. I can use tlbimp.exe to generate an interop.CarCom.dll and then use the COM server through this interop dll, but I want this interop assembly to be generated automatically when I just add a reference to the COM server. Could someone tell me what's wrong? Many thanks.

    Read the article

  • not a valid AllXsd value

    - by jun
    I got this from a SOAP client request:

        Exception: SoapFault exception: [soap:Client] Server was unable to read request. ---
        There is an error in XML document (2, 273). ---
        The string '2010-5-24' is not a valid AllXsd value. in /path/filinet.php:21
        Stack trace:
        #0 [internal function]: SoapClient->__call('SubIdDetailsByO...', Array)
        #1 /path/filinet.php(21): SoapClient->SubIdDetailsByOfferId(Array)
        #2 {main}

    It seems like I am sending an incorrect value; how do I format my value as an AllXsd value in PHP? Here is my code:

        <?php
        $start = isset($_GET['start']) ? $_GET['start'] : date("Y-m-d");
        $end = isset($_GET['end']) ? $_GET['end'] : date("Y-m-d");

        //define parameter array
        $param = array('userName'=>'user',
                       'password'=>'pass',
                       'startDate' => $start,
                       'endDate' => $end,
                       'promotionId' => '');

        //Get wsdl path
        $serverPath = "https://webservices.filinet.com/affiliate/reports.asmx?WSDL";

        //Declare Soap client
        $client = new SoapClient($serverPath);

        try {
            //make the call
            $result = $client->SubIdDetailsByOfferId($param);

            //If error found display error
            if(isset($fault)) {
                echo "Error: ". $fault;
            }
            //If no error display response
            else {
                //Used to display raw XML in the Web Browser
                header("Content-Type: text/xml;");
                //SubIdDetailsResult = XML results
                echo $result->SubIdDetailsByOfferIdResult;
            }
        }
        catch(SoapFault $ex) {
            echo "<b>Exception:</b> ". $ex;
        }
        unset($client);
        ?>

    Read the article

  • Play and Stop in One button

    - by Ardi
    I'm a newbie. I'm trying to make audio play and stop with one button only, but I'm in trouble now: if I touch the button while the audio is playing, it doesn't stop; it starts playing the audio again and makes a double sound. Here's my code:

        public class ProjectisengActivity extends Activity{
            ImageButton mainkan;
            MediaPlayer mp;
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.test2);
                mainkan=(ImageButton)findViewById(R.id.imageButton1);
                mainkan.setOnClickListener(new OnClickListener(){
                    @Override
                    public void onClick(View v){
                        go();
                    }
                });

            public void go(){
                mp=MediaPlayer.create(ProjectisengActivity.this, R.raw.test);
                if(mp.isPlaying()){
                    mp.stop();
                    try {
                        mp.prepare();
                    } catch (IllegalStateException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    } catch (IOException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    mp.seekTo(0);
                } else {
                    mp.start();
                }

    I'm developing for Android 3.0 (Honeycomb). Thanks for the help.

    Read the article

  • Securely erasing a file using simple methods?

    - by Jason
    Hello, I am using C# .NET Framework 2.0. I have a question relating to file shredding. My target operating systems are Windows 7, Windows Vista, and Windows XP. Possibly Windows Server 2003 or 2008, but I'm guessing they should be the same as the first three. My goal is to securely erase a file. I don't believe using File.Delete is secure at all. I read somewhere that the operating system simply marks the raw hard-disk data for deletion when you delete a file - the data is not erased at all. That's why there exist so many working methods to recover supposedly "deleted" files. I also read that this is why it's much more useful to overwrite the file, because then the data on disk actually has to be changed. Is this true? Is this generally what's needed? If so, I believe I can simply write the file full of 1's and 0's a few times. I've read:

        http://www.codeproject.com/KB/files/NShred.aspx
        http://blogs.computerworld.com/node/5756
        http://blogs.computerworld.com/node/5687
        http://stackoverflow.com/questions/4147775/securely-deleting-a-file-in-c-net
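    The overwrite-then-delete idea reads roughly like the sketch below (Python rather than the asker's C#, purely to illustrate the sequence of operations): open the file in place, overwrite its full length a few times, force the writes to disk, then delete it.

        import os

        def shred(path, passes=3):
            length = os.path.getsize(path)
            with open(path, 'r+b') as f:
                for _ in range(passes):
                    f.seek(0)
                    f.write(os.urandom(length))   # random bytes; 0x00/0xFF passes are also common
                    f.flush()
                    os.fsync(f.fileno())          # push the write through the OS cache
            os.remove(path)

        # Caveats: journaling filesystems, SSD wear levelling and shadow copies can
        # all retain older copies of the data, so this is best-effort, not a guarantee.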

    Read the article

  • Ruby on Rails: How to sanitize a string for SQL when not using find and other built-in methods?

    - by williamjones
    I'm trying to sanitize a string that involves user input without having to resort to manually crafting my own possibly buggy regex, if possible. There are a number of methods in Rails that allow you to enter native SQL commands; how do people escape user input for those? The question I'm asking is a broad one, but in my particular case I'm working with a column in my Postgres database that Rails does not natively understand as far as I know, the tsvector, which holds plain text search information. Rails is able to write and read from it as if it's a string; however, unlike a string, it doesn't seem to be automatically escaping it when I do things like vector= inside the model. For example, when I do model.name='::', where name is a string, it works fine. When I do model.vector='::' it errors out:

        ActiveRecord::StatementInvalid: PGError: ERROR: syntax error in tsvector: "::"
        "vectors" = E'::' WHERE "id" = 1

    This seems to be a problem caused by the lack of escaping of the colons, and I can manually set vector='\:\:' fine. I also had the bright idea that maybe I can just call something like:

        ActiveRecord::Base.connection.execute "UPDATE medias SET vectors = ? WHERE id = 1", "::"

    However, this syntax doesn't work, because the raw SQL commands don't have access to find's method of escaping and inserting strings using the ? mark. This strikes me as the same problem as calling connection.execute with any type of user input, as it all boils down to sanitizing the strings, but I can't seem to find any way to manually call Rails' SQL string sanitization methods. Can anyone provide any advice?

    Read the article

  • Better viewing of postfix mail queue files than postcat?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered sitting in our secondary mail server. Their link to the main server had been (and still is) down for two days and they needed to see their email. So I wrote up a quick perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl

        $qCmd="mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;

        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) { next; }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") { next; }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject=`cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.

    Read the article

  • Getter and Setter vs. Builder strategy

    - by Extrakun
    I was reading a JavaWorld article on getters and setters whose basic premise is that getters expose the internal content of an object, hence tightening coupling, and which goes on to provide examples using builder objects. I was rather leery of abolishing getters/setters, but on a second reading of the article I came to quite like the idea. However, sometimes I just need one crucial element of an entity class, such as the user's id, and writing a whole class just to extract that crucial element seems like overkill. It also implies that for a different view, a different type of importer/exporter must be implemented (or all the data of the class must be exported, thus resulting in waste). Usually I tend towards filtering the result of a getter - for example, if I need to output the price of a product in a different currency, I would code it as:

        return CurrencyOutput::convertTo($product->price(), 'USD');

    This is with the understanding that the raw output of a getter is not necessarily the final result to be pushed onto a screen or into a database. Are getters/setters really as bad as they are portrayed to be? When should one adopt a builder strategy, or a 'get the result and filter it' approach? How do you avoid having a class that needs to know about every other object if you are not using getters/setters?

    Read the article

  • Looking for streaming xml pretty printer in C/C++ using expat or libxml2

    - by Mark Zeren
    I'm looking for a streaming xml pretty printer for C/C++ that's either self contained or that uses libxml2 or expat. I've searched a bit and not found one. It seems like something that would be generally useful. Am I missing an obvious tool that does this? Background: I have a library that outputs xml without whitespace all on one line. In some cases I'd like to pretty print that output. I'm looking for a BSD-ish licensed C/C++ library or sample code that will take a raw xml byte stream and pretty print it. Here's some pseudo code showing one way that I might use this functionality:

        void my_write(const char* buf, int len);

        PrettyPrinter pp(bind(&my_write));
        while (...) {
            // ... get some more xml ...
            const char* buf = xmlSource.get_buf();
            int len = xmlSource.get_buf_len();
            int written = pp.write(buf, len); // calls my_write with pretty printed xml
            // ... error handling, maybe call write again, etc. ...
        }

    I'd like to avoid instantiating a DOM representation. I already have dependencies on the expat and libxml2 shared libraries, and I'd rather not add any more shared library dependencies.
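    Not the C/C++ library being asked for, but the streaming idea itself is compact enough to illustrate: feed chunks to an event-driven parser and emit indented output from the start/end-element callbacks, so no DOM is ever built. The sketch below uses Python's xml.sax (which wraps expat); expat's C API exposes the same callback style via XML_SetElementHandler and XML_SetCharacterDataHandler. Attribute escaping and mixed content are glossed over.

        import xml.sax

        class PrettyPrinter(xml.sax.ContentHandler):
            def __init__(self, write):
                super().__init__()
                self.write = write
                self.depth = 0
            def startElement(self, name, attrs):
                attr_text = ''.join(' %s="%s"' % (k, v) for k, v in attrs.items())
                self.write('%s<%s%s>\n' % ('  ' * self.depth, name, attr_text))
                self.depth += 1
            def endElement(self, name):
                self.depth -= 1
                self.write('%s</%s>\n' % ('  ' * self.depth, name))
            def characters(self, content):
                if content.strip():
                    self.write('%s%s\n' % ('  ' * self.depth, content.strip()))

        parser = xml.sax.make_parser()
        parser.setContentHandler(PrettyPrinter(lambda s: print(s, end='')))
        parser.feed('<root><a x="1">hi</a>')   # chunks can arrive incrementally
        parser.feed('<b/></root>')
        parser.close()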

    Read the article

  • .NET regex: Match.nextMatch() never returns

    - by Jimmy
    I have a regex that seems to have worked fine for the past year or so, and all of a sudden today with a new slightly different text to match against, Match.nextMatch() never returns. I'm no regex expert and I'm sure the regex can be optimized, but previous data sets weren't much more complex than what I've tried today. Furthermore, the regex works fine against the offending data set in a tool like RegexBuddy; it's only in .net (running in debug in Visual Studio) that it seems to hang. Nevertheless, if anyone can figure out how to tweak the regex to make it work, I'd really appreciate it. This is the regex: <tr>(<td[^>]*><a[^>]*>(?<callOptionTicker>[A-Z]{1,5}\d{6}C\d{8})</a></td>)(<td[^>]*>.*?</td>){6}(<td[^>]*><b><a[^>]*>(?<strikePrice>\d*\.\d*)</a></b></td>)(<td[^>]*><a[^>]*>(?<putOptionTicker>[A-Z]{1,5}\d{6}P\d{8})</a></td>) It's meant to extract put and call option tickers from a Yahoo option chain page (i.e., raw HTML). It works fine for IBM http://finance.yahoo.com/q/os?s=IBM&m=2010-05-21 It doesn't work for SPX options (this is the offending data set) http://finance.yahoo.com/q/os?s=I:SPX.W&m=2010-05

    Read the article

  • ETL, Esper or Drools?

    - by geoaxis
    Hello, the question environment relates to JavaEE and Spring. I am developing a system which can start and stop arbitrary TCP (or other) listeners for incoming messages. There could be a need to authenticate these messages. These messages need to be parsed and stored in some other entities. These entities model which fields they store. So for example, if I have property1 that can have two text fields FillLevel1 and FillLevel2, I could receive messages over TCP which have both fill levels specified in text as F1=100;F2=90. Later I could add another field, say FillLevel3, when I start receiving messages F1=xx;F2=xx;F3=xx. But this is a conscious decision on the part of the system modeler. My question is what you think is better to use for parsing and storing the messages: ETL (using Pentaho, which is used in another system), where you store the raw message and use a task executor to consume them one by one and store the transformed messages as per your rules. One could use Esper or Drools to do the same thing, storing rules and executing them with a timer, but I am not sure how dynamic you could get with making rules (they have to be made by the end user in a running system, and preferably in the most user-friendly way, i.e. no scripts or code, only a GUI). The end user should be capable of changing the parse rules. It is also possible that the end user might want to change the archived data as well (for example, in the above example, if a new value of FillLevel is added, one would like to put FillLevel=-99 in the previous values to make the data consistent). Please ask for explanations; I have the feeling that I need to revise this question a bit. Thanks

    Read the article

  • How to accurately parse smtp message status code (DSN)?

    - by Geo
    RFC 1893 says that status codes will come in the format below (you can read more here). But our bounce management system is having a hard time parsing the error status code from bounce messages. We are able to get the raw message, but depending on the email server the code will come in different places. Is there any rule on how to parse this type of message to obtain better results? We are not looking for a 100% solution, but at least 80%.

        This document defines a new set of status codes to report mail system
        conditions. These status codes are intended to be used for media and
        language independent status reporting. They are not intended for system
        specific diagnostics. The syntax of the new status codes is defined as:

            status-code = class "." subject "." detail
            class = "2"/"4"/"5"
            subject = 1*3digit
            detail = 1*3digit

        White-space characters and comments are NOT allowed within a status-code.
        Each numeric sub-code within the status-code MUST be expressed without
        leading zero digits.

    The quote above from the RFC says one thing, but the text below, from a leading bounce management tool, says something different. Where can I get a good source of standard status codes?

        Return Code   Description
        0             UNDETERMINED - (ie. Recipient Reply)
        10            HARD BOUNCE - (ie. User Unknown)
        20            SOFT BOUNCE - General
        21            SOFT BOUNCE - Dns Failure
        22            SOFT BOUNCE - Mailbox Full
        23            SOFT BOUNCE - Message Too Large
        30            BOUNCE - NO EMAIL ADDRESS. VERY RARE!
        40            GENERAL BOUNCE
        50            MAIL BLOCK - General
        51            MAIL BLOCK - Known Spammer
        52            MAIL BLOCK - Spam Detected
        53            MAIL BLOCK - Attachment Detected
        54            MAIL BLOCK - Relay Denied
        60            AUTO REPLY - (ie. Out Of Office)
        70            TRANSIENT BOUNCE
        80            SUBSCRIBE Request
        90            UNSUBSCRIBE/REMOVE Request
        100           CHALLENGE-RESPONSE
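    For the "at least 80%" goal, a rough sketch of the usual layering (a starting point under stated assumptions, not a standard): prefer an explicit Status: field from a DSN, then any RFC 1893-style x.y.z code found anywhere in the raw message, then fall back to a bare three-digit SMTP reply code; the vendor's own numbering (0, 10, 20, ...) would then be a separate mapping applied on top of whatever class (2/4/5) was found.

        import re

        STATUS_FIELD = re.compile(r'^Status:\s*([245]\.\d{1,3}\.\d{1,3})', re.M | re.I)
        ANY_STATUS   = re.compile(r'\b([245]\.\d{1,3}\.\d{1,3})\b')
        SMTP_REPLY   = re.compile(r'\b([245]\d\d)\b')

        def extract_status(raw_message):
            # try the most structured source first, then progressively looser ones
            for pattern in (STATUS_FIELD, ANY_STATUS, SMTP_REPLY):
                m = pattern.search(raw_message)
                if m:
                    return m.group(1)
            return None   # undetermined

        print(extract_status('Status: 5.1.1\nAction: failed'))   # 5.1.1
        print(extract_status('550 5.7.1 relay denied'))          # 5.7.1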

    Read the article

  • How Can I Generate Equivalent Output Using the CryptoAPI and the .NET Encryption (TripleDESCryptoServiceProvider)?

    - by S. Butts
    I have some C#/.NET code that encrypts and decrypts data using TripleDES encryption. It sticks to the sample code provided at MSDN pretty closely. The encryption piece looks like the following:

        TripleDESCryptoServiceProvider _desProvider = new TripleDESCryptoServiceProvider();

        //bytes for key and initialization vector
        //keyBytes is 24 bytes of stuff, vectorBytes is 8 bytes of stuff
        byte[] keyBytes;
        byte[] vectorBytes;

        FileStream fStream = File.Open(locationOfFile, FileMode.Create, FileAccess.Write);
        CryptoStream cStream = new CryptoStream(fStream, _desProvider.CreateEncryptor(keyBytes, vectorBytes), CryptoStreamMode.Write);
        BinaryWriter bWriter = new BinaryWriter(cStream);

        //write out encrypted data
        //raw data is a few bytes of binary information
        byte[] rawData;
        bWriter.Write(rawData);

    With encrypting and decrypting in C#, this all works like a charm. The problem is I need to write a small Win32 utility that will duplicate the encryption above. I have tried several methods using the CryptoAPI, and I simply do not get output that the .NET piece can decrypt, no matter what I do. Can someone please tell me what the equivalent C++ code is that will produce the same output? I am not certain just what methods of the CryptoAPI the .NET functions use to encrypt the data. What options are used, and what method of generating the key is used? Before someone suggests that I just write it in C# anyway, or create some common library bridge for them, those options are unfortunately off the table. It really has to work in Win32 with .NET and without using a DLL. I have some leeway in changing the C# code. I apologize in advance if this is bone-headed, as I am new to encryption.
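    Not the CryptoAPI code being asked for, but one concrete reference point for "what options are used": TripleDESCryptoServiceProvider defaults to CBC mode with PKCS7 padding and uses the key bytes directly (no extra key derivation), so the Win32 side has to reproduce exactly that with the same 24-byte key and 8-byte IV. The sketch below (Python, pyca/cryptography) produces ciphertext under those same choices, which can be handy for checking which side is drifting.

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives import padding

        # Note: very recent cryptography releases move TripleDES into the
        # hazmat "decrepit" module; the call pattern is otherwise the same.
        key = os.urandom(24)   # keyBytes in the question
        iv  = os.urandom(8)    # vectorBytes in the question
        raw = b'a few bytes of binary information'

        padder = padding.PKCS7(64).padder()          # 64-bit block, matching .NET's default
        padded = padder.update(raw) + padder.finalize()

        enc = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).encryptor()
        ciphertext = enc.update(padded) + enc.finalize()
        print(ciphertext.hex())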

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >