Search Results

Search found 5147 results on 206 pages for '3ds max'.

Page 168/206

  • Java split xml file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split flat files (that part is working fine) and XML files. The idea is to split based on a number of output files: I have a file and I want to split it into x files (x is a parameter). I do the split by taking the size of the file and dividing it by the number of output files. My solution was to use a BufferedReader like this:

        while ((n = reader.read(buffer, 0, buffer.length)) != -1) {
            ...
        }

    The main problem is that I cannot just cut the XML file anywhere: I have to split it on a block delimited by a start tag and an end tag:

        <start tag>
        bla bla xml stuff
        </end tag>

    So I cannot cut a block in the middle. If I'm halfway through a block when the size of my new file exceeds the maximum, I have to keep reading until the end tag and only then start the next file. The problem is that I hit all sorts of cases, and finding the end tag is a bit tricky: the buffer may end in the middle of the end tag, or exactly at the end of the end tag with no characters after it, and so on, all while looping to read the next block. Sometimes the end of one block concatenated with the start of the next one forms the end tag. I hope you get the idea. My question is: does anyone have an algorithm that does this more accurately and handles all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.
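
    One possible shape for the boundary handling, sketched below under assumptions the post does not make: the file is a flat sequence of records each closed by the same end tag (the tag name "record" is hypothetical), and each closing tag ends its line, so line-oriented reading is enough. The idea is to accumulate input into the current chunk and only roll over to a new file once the chunk has passed its size budget and a complete closing tag has just been written.

        import java.io.*;

        // Sketch: cut on "</record>" boundaries once a chunk passes its size budget.
        public class XmlChunkSplitter {

            public static void split(File input, int parts) throws IOException {
                long budget = input.length() / parts;   // target size per output file
                String endTag = "</record>";            // assumed record delimiter
                int part = 0;
                StringBuilder chunk = new StringBuilder();
                try (BufferedReader reader = new BufferedReader(new FileReader(input))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        chunk.append(line).append('\n');
                        // Only cut when we are over budget AND the line we just wrote
                        // closes a record, so no block is ever split in the middle.
                        // (Character count is used as a rough byte approximation.)
                        if (chunk.length() >= budget && line.contains(endTag)) {
                            writePart(input, ++part, chunk);
                            chunk.setLength(0);
                        }
                    }
                    if (chunk.length() > 0) {
                        writePart(input, ++part, chunk);   // whatever is left over
                    }
                }
            }

            private static void writePart(File input, int part, CharSequence content) throws IOException {
                try (Writer w = new FileWriter(input.getName() + ".part" + part)) {
                    w.append(content);
                }
            }
        }

    The XML declaration and any root-element wrapper still have to be copied into each part separately; the sketch only shows the boundary logic.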

  • LINQ - How to query a range of effective dates that only has start dates

    - by itchi
    I'm using C# 3.5 and Entity Framework. I have a list of items in the database that contain interest rates. Unfortunately this list only contains the effective start date. I need to query this list for all items within a range. However, I can't see a way to do this without querying the database twice. (Although I'm wondering if delayed execution with Entity Framework is making only one call.) Regardless, I'm wondering if I can do this without using my context twice.

        internal IQueryable<Interest> GetInterests(DateTime startDate, DateTime endDate)
        {
            var FirstDate = Context.All().Where(x => x.START_DATE < startDate).Max(x => x.START_DATE);
            IQueryable<Interest> listOfItems = Context.All().Where(x => x.START_DATE >= FirstDate
                                                                     && x.START_DATE <= endDate);
            return listOfItems;
        }
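
    For what it's worth, a sketch of how the two steps could be composed into a single IQueryable so the provider has a chance to emit one SQL statement. Whether EF 3.5 translates the nested Max this way is an assumption worth verifying, not something the post confirms:

        internal IQueryable<Interest> GetInterests(DateTime startDate, DateTime endDate)
        {
            var all = Context.All();
            // The latest START_DATE before the range is expressed as a subquery instead of
            // being materialized first, so enumeration can issue a single query.
            return all.Where(x => x.START_DATE <= endDate
                               && x.START_DATE >= all.Where(y => y.START_DATE < startDate)
                                                     .Max(y => y.START_DATE));
        }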

  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check!

    (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example to check that I'm interpreting Tacq, Fosc, TAD, and the divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).)

    A simple example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8 MHz and has an instruction cycle of 0.5 us (4 clock steps per instruction). So:

        Tacq    = 6.06 us   (acquisition time for ADC, assume chip temp. = 50*C) [datasheet p34]
        Fosc    = 8 MHz     (? clock speed)
        divisor = 4         (4 clock steps per CPU instruction)
        TAD     = 1/(Fosc/divisor) = 0.5 us
        Conversion time = 13*TAD [datasheet p31] = 6.5 us
        ADC duration    = 12.56 us [? Tacq + 13*TAD]

    Assuming at least 2 instructions for load/store, that is another 1 us (0.5 us per instruction), which would give a max sampling rate of 73.7 ksps (1/13.56). Supposing 8 more instructions for real-time processing, that is another 4 us. Thus, the total ADC/handling time is 17.56 us (12.56 us + 1 us + 4 us), so the expected upper sampling rate is 56.9 ksps, and the Nyquist frequency for this sampling rate is therefore 28 kHz.

    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet in obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC / dsPIC chips would be much appreciated! AKE
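
    As a quick sanity check of the arithmetic above (a sketch using only the figures quoted in the question, nothing re-read from the datasheet), the numbers do come out as stated:

        # Recomputing the figures quoted above; all inputs come from the question itself.
        t_acq   = 6.06e-6                       # ADC acquisition time (s)
        f_osc   = 8e6                           # oscillator frequency (Hz)
        divisor = 4                             # clock steps per instruction
        t_ad    = 1 / (f_osc / divisor)         # 0.5 us
        t_conv  = 13 * t_ad                     # 6.5 us conversion time
        t_adc   = t_acq + t_conv                # ~12.56 us ADC duration
        t_total = t_adc + 2 * 0.5e-6 + 8 * 0.5e-6   # + load/store + processing instructions

        print(round(t_adc * 1e6, 2))                 # 12.56 (us)
        print(round(1 / (t_adc + 1e-6) / 1e3, 1))    # 73.7 ksps with load/store only
        print(round(1 / t_total / 1e3, 1))           # 56.9 ksps overall
        print(round(1 / t_total / 2e3, 1))           # 28.5 kHz Nyquist limit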

  • How do I cast <T> to varbinary and be still be able to perform a CONVERT on the sql side? Implicatio

    - by Biff MaGriff
    Hello, I'm writing this application that will allow a user to define custom quizzes and then allow another user to respond to the questions. Each question of the quiz has a corresponding datatype. All the responses to all the questions are stored vertically in my [Response] table, and I currently use 2 fields to store the response:

        // Response schema
        ResponseID    int
        QuizPersonID  int
        QuestionID    int
        ChoiceID      int             // maps to Choice table, used for drop-down lists
        ChoiceValue   varbinary(MAX)  // used to store a user-entered value

    I'm using .NET 3.5, C#, and SQL Server 2008. I'm thinking that I would want to store different datatypes in the same field and then in my SQL report proc I would CONVERT to the proper datatype. I'm thinking this is ideal because I only have to check one field. I'm also thinking it might be more trouble than it is worth. I think my other options are to store the data as strings in the db (yuck), or to have a column for each datatype I might use.

    So what I would like to know is: how would I format my datatypes in C# so that they can be converted properly in SQL? What is the performance hit for converting in SQL? Should I just make a whole wack of columns for each datatype?
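
    As a tiny, purely illustrative sketch of the SQL side of that round trip (the column type is the one from the schema above; how the C# layer lays out the bytes still has to agree with it):

        -- Round-tripping typed values through varbinary(MAX), as a sketch.
        DECLARE @asInt  varbinary(MAX) = CONVERT(varbinary(MAX), 42);
        DECLARE @asDate varbinary(MAX) = CONVERT(varbinary(MAX), CONVERT(datetime, '2010-05-14'));

        SELECT CONVERT(int, @asInt)       AS IntValue,    -- 42
               CONVERT(datetime, @asDate) AS DateValue;   -- 2010-05-14 00:00:00.000

    The caveat is that CONVERT works on SQL Server's own internal byte layout, so bytes written from C# (e.g. with BitConverter) are not guaranteed to round-trip through CONVERT unless both sides agree on that layout first.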

  • Can pdflatex (or any tex package) automatically rescale included images which have been reduced in s

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300 dpi when printed.

    As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5 cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large.

    Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched).

    I know I can resize the original images with various command line applications, and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300 dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF capped images at say 72-100 dpi while the final print one could cap at 600 (if at all).

  • Semantic Grid System, Media Query issue

    - by Andy
    I'm using the Semantic Grid System to build a responsive site. However, something isn't quite right with the media queries that should kick in once the viewport hits a particular size. I'll reference their example on the website to illustrate what I mean: if I view it on my iPhone, for example, given that it is supposed to adjust to a single-column structure on a mobile device, it still renders the web version of the page. That is true for both Safari and Chrome on my iPhone. However, if I use the RWD bookmarklet to check its appearance at different resolutions, it appears as expected at the mobile resolution. Also, ironically, if I resize the page in Safari on my desktop it adjusts accordingly once I get down to the appropriate screen size, but not in Firefox.

    The media query that it uses once it hits 720px is:

        @media screen and (max-width: 720px) {
            #maincolumn, #sidebar {
                .column(12);
                margin-bottom: 1em;
            }
        }

    I might be wide of the mark here, but I think that must be the issue. Then again, given that this is directly from the semantic.gs website, I'm not inclined to question their own code. Any idea what the problem might be?
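
    One thing the symptoms above are consistent with (an assumption, since the page's markup isn't shown): desktop resizing and the RWD bookmarklet change the real viewport width, but mobile Safari lays pages out on a ~980px virtual viewport unless a viewport meta tag says otherwise, so a 720px breakpoint never fires on the phone. A minimal sketch of the tag that would need to be in the page <head>:

        <!-- Hypothetical addition: make the layout viewport track the device width
             so max-width media queries apply on phones. -->
        <meta name="viewport" content="width=device-width, initial-scale=1">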

  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question are applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema:

        Badges
            Id
            UserId
            Name
            Date

    This works well for badges like [Epic] and [Legendary] which have unique names, but the silver and gold tag-specific badges seem to be mixed in together by having the same exact name. Here's an example query I wrote for the [mysql] tag:

        SELECT UserId as [User Link], Date
        FROM Badges
        WHERE Name = 'mysql'
        ORDER BY Date ASC

    The (slightly annotated) output, as seen on Stexdex:

        User Link          Date
        ---------------    -------------------
        // all for silver except where noted
        Bill Karwin        2009-02-20 11:00:25
        Quassnoi           2009-06-01 10:00:16
        Greg               2009-10-22 10:00:25
        Quassnoi           2009-10-31 10:00:24   // for gold
        Bill Karwin        2009-11-23 11:00:30   // for gold
        cletus             2010-01-01 11:00:23
        OMG Ponies         2010-01-03 11:00:48
        Pascal MARTIN      2010-02-17 11:00:29
        Mark Byers         2010-04-07 10:00:35
        Daniel Vassallo    2010-05-14 10:00:38

    This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms, as of the end of May 2010 only 2 users have earned the gold [mysql] tag: Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice. So this is the way I understand it: the first time an Id appears (in chronological order) is for the silver badge; the second time is for the gold.

    Now, the above result mixes the silver and gold entries together. My questions are:

    Is this a typical design, or are there much friendlier schemas/normalizations/whatever you call it?
    In the current design, how would you query the silver and gold badges separately? GROUP BY Id and picking the min/max or first/second by the Date somehow?
    How can you write a query that lists all the silver badges first, then all the gold badges next? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too much repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead?
    What is this idiom called? A row "partitioning" query to put them into "buckets" or something?
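
    Not from the original post, but a sketch of the usual shape of an answer to the second and third questions on SQL Server (which is what Data Explorer runs): a ranking window function can bucket each user's first and second badge row, so the silver/gold split and the ordering both fall out of one query.

        -- Sketch: rank each user's [mysql] badge rows by date; 1 = silver, 2 = gold.
        SELECT UserId AS [User Link], Date,
               CASE rn WHEN 1 THEN 'silver' ELSE 'gold' END AS Badge
        FROM (
            SELECT UserId, Date,
                   ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date ASC) AS rn
            FROM Badges
            WHERE Name = 'mysql'
        ) AS ranked
        ORDER BY rn, Date;   -- all silver rows first, then the gold ones

    The idiom the last question is reaching for is usually just called ranking (or partitioning) with window functions; the WHERE filter and badge labels above are taken from the example query.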

  • forward slash problem in xsl ams xsql

    - by Peter Kaleta
    Hi, I have a simple xsql page:

        <?xml version="1.0" encoding="utf-8"?>
        <?xml-stylesheet type="text/xsl" href="zad1.xsl" ?>
        <page xmlns:xsql="urn:oracle-xsql" connection="java:comp/env/jdbc/mondialDS">
          <xsql:query max-rows="-1" null-indicator="no" tag-case="lower" rowset-element="continents">
            select name as continent from mondial_user.Continent order by 1
          </xsql:query>
        </page>

    which gives me a list of continents with "australia/oceania" among them. I use this XSL on the above xsql:

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- Root template -->
          <res>
            <xsl:template match="/continents">
              <xsl:for-each select="row">
                <re>
                  <xsl:value-of select="continent"/>
                </re>
              </xsl:for-each>
            </xsl:template>
          </res>
        </xsl:stylesheet>

    and Firefox throws an error about a badly formed XML document, pointing at:

        AfricaAmericaAsiaAustralia/OceaniaEurope
        -----------------------------------^

    Help appreciated.

  • When sending headers to download a PDF, Safari appends .html

    - by alex
    Here are the request and response headers:

        http://www.example.com/get/pdf

        GET /~get/pdf HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.example.com
        Cookie: etc

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 02:20:43 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
        X-Powered-By: Me
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Pragma: no-cache
        Cache-Control: private
        Content-Disposition: attachment; filename="File #1.pdf"
        Content-Length: 18776
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Basically, the response headers are sent by DOMPDF's stream() method. In Firefox, the file is prompted as File #1.pdf. However, in Safari, the file is saved as File #1.pdf.html. Does anyone know why Safari is appending the html extension to the filename?
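
    One detail in the headers above that may matter (an observation, not a confirmed fix): the body is a PDF but the response still declares Content-Type: text/html, and Safari takes the declared MIME type into account when it picks a file extension. A sketch of sending the headers by hand with the correct type, where $pdf is a hypothetical variable holding the rendered PDF string:

        <?php
        // Sketch: declare the real MIME type before streaming the PDF bytes.
        header('Content-Type: application/pdf');
        header('Content-Disposition: attachment; filename="File #1.pdf"');
        header('Content-Length: ' . strlen($pdf));   // $pdf holds the DOMPDF output
        echo $pdf;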

  • Strange execution times in t-sql

    - by TonyP
    Hi all, I have two stored procedures; the first one calls the second. If I execute the second one alone it takes over 5 minutes to complete, but when executed from within the first one it takes a little over 1 minute. What is the reason?

    Here is the first one:

        ALTER procedure [dbo].[schRefreshPriceListItemGroups]
        as
        begin tran
        delete from PriceListItemGroups
        if @@error != 0 goto rolback

        Insert PriceListItemGroups(comno,t$cuno,t$cpls,t$cpgs,t$dsca,t$cpru)
        SELECT distinct c.comno,c.t$cuno, c.t$cpls,I.t$cpgs,g.t$dsca,g.t$cpru
        FROM TTCCOM010nnn C
        JOIN TTDSLS032nnn PL ON PL.comno = c.Comno and PL.t$cpls = c.t$cpls
        JOIN TTIITM001nnn I ON I.t$item = pl.t$item AND I.comno = pl.comNo
        JOIN TTCMCS024nnn G ON g.T$cprg = I.t$cpgs AND g.comno = I.Comno
        WHERE c.t$cpls != ''
        order by comno desc, t$cuno, t$cpgs
        if @@error != 0 goto rolback

        -----------------------------------------------------
        Exec scrRefreshCustomersCatalogs
        -----------------------------------------------------

        commit tran
        return

        rolback:
        Rollback tran

    And the second one:

        Alter proc scrRefreshCustomersCatalogs
        as
        declare @baanIds table(id int identity(1,1), baanId varchar(12))
        declare @baanId varchar(12), @i int, @n int

        Insert @baanIds(BaanId) select baanId from ftElBaanIds()
        SELECT @I = 1, @n = max(id) from @baanIds
        select @i, @n

        Begin tran
        if @@error != 0 goto xRollBack
        WHILE @I <= @n
        Begin
            select @baanId = baanId from @baanIds where id = @i
            if @@error != 0 goto xRollBack
            Delete from customersCatalogs where comno + '-' + t$cuno = @baanId
            print Convert(varchar, @i) + ' baanId=' + @baanId
            Insert customersCatalogs exec customersCatalog @baanId
            if @@error != 0 goto xRollBack
            set @i = @i + 1;
        end
        Commit Tran
        Update statistics customersCatalogs with fullscan
        Return

        xRollBack:
        Print '*****Rolling back*************'
        Rollback tran

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob.

    Now, those fields are max 4 KB (but the average is 2-3 KB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant.

    I have upgraded to PostgreSQL 9.0 with Hibernate 3.6 and was stuck having to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug has brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with compared to a normal field.

    So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt the performance.

    Are there good benchmarks about the performance of both of these? Has anybody made the switch and seen a difference?

  • Making a 64 bit shared library that dynamically links to a 32 bit library on Mac OS X Snow Leopard

    - by carneades
    Update: After some more reading I see that this problem is totally general: you can't mix architectures in the same process, so 64-bit Java cannot dlopen() a 32-bit library like FMOD. Is there any possible workaround for this, keeping in mind I'm writing my own C interface to the FMOD library?

    I need to make a 64-bit dylib on Mac OS X because Java Native Access only likes 64-bit libraries on 64-bit machines. The problem is, my C source code dynamically includes FMOD, which on Mac only provides 32-bit dylibs. When I try to compile without the -m32 option (since I must output a 64-bit dylib) I get the following error:

        gcc -dynamiclib -std=c99 -pedantic -Wall -O3 -fPIC -pthread -o ../bin/libpenntotalrecall_fmod.dylib ../../src/libpenntotalrecall_fmod.c -lfmodex -L../../lib/osx/
        ld: warning: in /usr/lib/libfmodex.dylib, missing required architecture x86_64 in file
        Undefined symbols:
          "_FMOD_System_CreateSound", referenced from:
              _startPlayback in ccJnlwrd.o
          "_FMOD_Channel_GetPosition", referenced from:
              _streamPosition in ccJnlwrd.o
          "_FMOD_System_Create", referenced from:
              _startPlayback in ccJnlwrd.o
          "_FMOD_System_PlaySound", referenced from:
              _startPlayback in ccJnlwrd.o
          "_FMOD_Sound_Release", referenced from:
              _stopPlayback in ccJnlwrd.o
          "_FMOD_Channel_IsPlaying", referenced from:
              _playbackInProgress in ccJnlwrd.o
          "_FMOD_System_Update", referenced from:
              _streamPosition in ccJnlwrd.o
              _startPlayback in ccJnlwrd.o
          "_FMOD_Channel_SetPaused", referenced from:
              _startPlayback in ccJnlwrd.o
          "_FMOD_System_Release", referenced from:
              _stopPlayback in ccJnlwrd.o
          "_FMOD_System_Init", referenced from:
              _startPlayback in ccJnlwrd.o
          "_FMOD_Channel_SetVolume", referenced from:
              _startPlayback in ccJnlwrd.o
          "_FMOD_System_Close", referenced from:
              _stopPlayback in ccJnlwrd.o
          "_FMOD_Channel_SetCallback", referenced from:
              _startPlayback in ccJnlwrd.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        make: *** [all] Error 1

    Shouldn't it be possible to get a 64-bit dylib from my source code that dynamically includes 32-bit libraries?!

  • Using NHibernate to select entities based on activity of children entities

    - by mannish
    I'm having a case of the Mondays... I need to select blog posts based on recent activity in the post's comments collection (a Post has a List<Comment> property and likewise, a Comment has a Post property, establishing the relationship). I don't want to show the same post twice, and I only need a subset of the entities, not all of the posts.

    My first thought was to grab all posts that have comments, then order those based on the most recent comment. For this to work, I'm pretty sure I'd have to limit the comments for each Post to the first/newest Comment. Last, I'd simply take the top 5 (or whatever max results number I want to pass into the method).

    My second thought was to grab all of the comments, ordered by CreatedOn, and filter so there's only one Comment per Post, then return those top (whatever) posts. This seems like the same as the first option, just going through the back door.

    I've got an ugly, two-query option working with some LINQ on the side for filtering, but I know there's a more elegant way to do it using the NHibernate API. Hoping to see some good ideas here.

  • How to modify the style of jQuery DatePicker's disabled dates?

    - by Clay
    Given this page: http://jqueryui.com/demos/datepicker/#min-max

    And viewing its source: http://jqueryui.com/themeroller/css/parseTheme.css.php

    I can change the following rule (using Chrome's inspect element feature) and see those changes reflected:

        .ui-state-disabled, .ui-widget-content .ui-state-disabled {
            opacity: .35;
            filter: Alpha(Opacity=35);
            background-image: none;
        }

    However, if I try to override it in my own test page with something like...

        .ui-state-disabled, .ui-widget-content .ui-state-disabled {
            opacity: .99 !important;
            filter: Alpha(Opacity=99) !important;
            background-image: none !important;
            color: Red !important;
        }

    ...I do not see my changes reflected in the calendar. I can make other changes in my own test page and those are reflected for other classes in the datepicker, so I'm not having any kind of path issue to the .js or .css files. What am I missing here?

    UPDATE/SOLUTION: Firebug to the rescue... this took care of my styling needs:

        .ui-datepicker-week-end { color: #c0c0c0 !important; }
        div#ui-datepicker-div.ui-datepicker { color: #c0c0c0; }
        div#ui-datepicker-div.ui-datepicker:hover { cursor: default !important; }
        .ui-datepicker-calendar th { color: #222222 !important; }

  • Perl to Ruby conversion (multidimensional arrays)

    - by Alex
    I'm just trying to get my head around a multidimensional array creation from a Perl script I'm currently converting to Ruby. I have zero experience in Perl, as in I opened my first Perl script this morning.

    Here is the original loop:

        my $tl = {};
        for my $zoom ($zoommin..$zoommax) {
            my $txmin = lon2tilex($lonmin, $zoom);
            my $txmax = lon2tilex($lonmax, $zoom);
            # Note that y=0 is near lat=+85.0511 and y=max is near
            # lat=-85.0511, so lat2tiley is monotonically decreasing.
            my $tymin = lat2tiley($latmax, $zoom);
            my $tymax = lat2tiley($latmin, $zoom);
            my $ntx = $txmax - $txmin + 1;
            my $nty = $tymax - $tymin + 1;
            printf "Schedule %d (%d x %d) tiles for zoom level %d for download ...\n",
                $ntx*$nty, $ntx, $nty, $zoom
                unless $opt{quiet};
            $tl->{$zoom} = [];
            for my $tx ($txmin..$txmax) {
                for my $ty ($tymin..$tymax) {
                    push @{$tl->{$zoom}}, { xyz => [ $tx, $ty, $zoom ] };
                }
            }
        }

    and what I have so far in Ruby:

        tl = []
        for zoom in zoommin..zoommax
          txmin = cm.tiles.xtile(lonmin,zoom)
          txmax = cm.tiles.xtile(lonmax,zoom)
          tymin = cm.tiles.ytile(latmax,zoom)
          tymax = cm.tiles.ytile(latmin,zoom)
          ntx = txmax - txmin + 1
          nty = tymax - tymin + 1
          tl[zoom] = []
          for tx in txmin..txmax
            for ty in tymin..tymax
              tl[zoom] << xyz = [tx,ty,zoom]
              puts tl
            end
          end
        end

    The part I'm unsure of is nested right at the root of the loops:

        push @{$tl->{$zoom}}, { xyz => [ $tx, $ty, $zoom ] };

    I'm sure this will be very simple for a seasoned Perl programmer. Thanks!
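
    Not part of the original post, but a sketch of how that Perl line usually maps over: $tl is a hash reference keyed by zoom level, and each pushed element is itself a hash with an xyz array inside it, so a closer Ruby equivalent uses a Hash of Arrays of Hashes rather than a flat array append (the loop bounds below are assumed to be defined as in the snippet above):

        # Sketch: Ruby equivalent of  push @{$tl->{$zoom}}, { xyz => [ $tx, $ty, $zoom ] }
        tl = Hash.new { |h, k| h[k] = [] }   # like $tl = {}, with per-zoom arrays created on demand

        (zoommin..zoommax).each do |zoom|
          (txmin..txmax).each do |tx|
            (tymin..tymax).each do |ty|
              tl[zoom] << { xyz: [tx, ty, zoom] }   # one hash per tile, filed under its zoom level
            end
          end
        end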

  • Browser back button restores empty fields

    - by Pierre
    I have a web page x.php (in a password-protected area of my web site) which has a form and a button which uses the POST method to send the form data and opens x.php#abc. This works pretty well. However, if the user decides to navigate back in Internet Explorer 7, all the fields in the original x.php get cleared and everything must be typed in again. I cannot save the posted information in a session and I am trying to understand how I can get IE7 to behave the way I want.

    I've searched the web and found answers which suggest that the HTTP header should contain explicit caching information. Currently, I've tried this:

        session_name("FOO");
        session_start();
        header("Pragma: public");
        header("Expires: Fri, 7 Nov 2008 23:00:00 GMT");
        header("Cache-Control: public, max-age=3600, must-revalidate");
        header("Last-Modified: Thu, 30 Oct 2008 17:00:00 GMT");

    and variations thereof, without success. Looking at the returned headers with a tool such as Wireshark shows me that Apache is indeed honouring my headers. So my question is: what am I doing wrong?

  • Java Swing Table questions

    - by Dalton Conley
    Hey guys, I'm working on an event calendar. I'm having some trouble getting my column heads to display. Here is the code:

        private JTable calendarTable;
        private DefaultTableModel calendarTableModel;
        final private String [] days = {"Sunday", "Monday", "Tuesday", "Wednesday",
                                        "Thursday", "Friday", "Saturday"};

        //////////////////////////////////////////////////////////////////////
        /* Setup the actual calendar table */
        calendarTableModel = new DefaultTableModel() {
            public boolean isCellEditable(int row, int col) {
                return false;
            }
        };

        // setup columns
        for(int i = 0; i < 7; i++)
            calendarTableModel.addColumn(days[i]);

        calendarTable = new JTable(calendarTableModel);
        calendarTable.getTableHeader().setResizingAllowed(false);
        calendarTable.getTableHeader().setReorderingAllowed(false);
        calendarTable.setColumnSelectionAllowed(true);
        calendarTable.setRowSelectionAllowed(true);
        calendarTable.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
        calendarTable.setRowHeight(105);
        calendarTableModel.setColumnCount(7);
        calendarTableModel.setRowCount(6);

    Also, I'm sort of new with tables: how can I make the rowHeight split between the max size of the table?
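
    One guess worth checking, since the snippet doesn't show how the table is added to its container: Swing only paints a JTable's header automatically when the table sits inside a JScrollPane; added directly to a panel, the header component has to be placed by hand. A minimal sketch, where contentPanel is a hypothetical container using BorderLayout:

        // Either wrap the table in a scroll pane (the header then appears on its own) ...
        JScrollPane scrollPane = new JScrollPane(calendarTable);
        contentPanel.add(scrollPane, BorderLayout.CENTER);

        // ... or, without a scroll pane, add the header component explicitly.
        contentPanel.add(calendarTable.getTableHeader(), BorderLayout.NORTH);
        contentPanel.add(calendarTable, BorderLayout.CENTER);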

  • Struggling with a data modeling problem

    - by rpat
    I am struggling with a data model (I use MySQL for the database). I am uneasy about what I have come up with. If someone could suggest a better approach, or point me to some reference material, I would appreciate it.

    The data would have organizations of many types. I am trying to do a 3-level classification (Class, Category, Type). Say if I have 'Italian Restaurant', it will have the classification Food Services > Restaurants > Italian. However, an organization may belong to multiple groups. A restaurant may also serve Chinese and Italian, so it will fit into 2 classifications: Food Services > Restaurants > Italian and Food Services > Restaurants > Chinese.

    The classification reference tables would be like the following:

        ORG_CLASS (RowId, ClassCode, ClassName)
            1, FOOD, Food Services

        ORG_CATEGORY (RowId, ClassCode, CategoryCode, CategoryName)
            1, FOOD, REST, Restaurants

        ORG_TYPE (RowId, ClassCode, CategoryCode, TypeCode, TypeName)
            100, FOOD, REST, ITAL, Italian
            101, FOOD, REST, CHIN, Chinese
            102, FOOD, REST, SPAN, Spanish
            103, FOOD, REST, MEXI, Mexican
            104, FOOD, REST, FREN, French
            105, FOOD, REST, MIDL, Middle Eastern

    The actual data tables would be like the following. I will allow an organization a max of 3 classifications: I will have 3 GroupIds, each pointing to a row in ORG_TYPE. So I have my ORGANIZATION_TABLE:

        ORGANIZATION_TABLE (OrgGroupId1, OrgGroupId2, OrgGroupId3, OrgName, OrgAddres)
            100, 103, NULL, MyRestaurant1, MyAddr1
            100, 102, NULL, MyRestaurant2, MyAddr2
            100, 104, 105,  MyRestaurant3, MyAddr3

    During data add, a dialog could let the user choose the class, category, and type, and the corresponding GroupId could be populated with the rowid from the ORG_TYPE table. During search, if all three classification levels are chosen, it will be more specific. For example, if Food Services > Restaurants > Italian is the criteria, the where clause would be 'where OrgGroupId1 = 100'. If only 2 levels are chosen (Food Services > Restaurants), I have to do 'where OrgGroupId1 in (100,101,102,103,104,105, .....)' - there could be a hundred in that list. I will disallow class-level search; that is, I will force selection of a class and category. The Ids would be integers.

    I am trying to see performance issues and other issues. Overall, would this work? Or do I need to throw this out and start from scratch?

  • Iterating through String word at a time in Python

    - by AlgoMan
    I have a string buffer of a huge text file. I have to search for given words/phrases in the string buffer. What's the efficient way to do it?

    I tried using the re module's matches, but as I have a huge text corpus to search through, this is taking a large amount of time.

    Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases from the dictionary, and increment the count in the dictionary when a key is found. One small optimization we thought of was to sort the dictionary of phrases/words by number of words, from most to fewest, and then compare each word start position in the string buffer against that list. If one phrase is found, we don't search for the other phrases (as it matched the longest phrase, which is what we want).

    Can someone suggest how to go through the string buffer word by word (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this?
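
    As a sketch of the word-by-word iteration being asked about (two assumptions the post doesn't pin down: whitespace/word-character tokenization is acceptable, and phrases match as contiguous word sequences), a sliding window over the token list lets the "longest phrase wins" idea work without re-scanning the whole buffer for every dictionary entry:

        import re
        from collections import defaultdict

        def count_phrases(buffer_text, phrases):
            """Count occurrences of each word/phrase (plain strings) in buffer_text."""
            words = re.findall(r"\w+", buffer_text.lower())            # word-by-word iteration
            targets = {tuple(p.lower().split()): p for p in phrases}   # word tuple -> original key
            max_len = max(len(t) for t in targets) if targets else 0
            counts = defaultdict(int)
            for i in range(len(words)):
                # Try the longest candidate window first, mirroring the sort-by-length idea.
                for n in range(min(max_len, len(words) - i), 0, -1):
                    window = tuple(words[i:i + n])
                    if window in targets:
                        counts[targets[window]] += 1
                        break
            return counts

        # Example: count_phrases("i love sports tennis best", ["tennis", "sports tennis best"])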

  • Liferay: Customise the web.xml HeaderFilter added during portlet deloyment

    - by gid
    I need to customise the deployment of my Liferay portlet so that the GWT nocache.js files don't get an 'Expires' HTTP header set.

    My war file looks like this:

        view.jsp
        com.foobar.MyEntryPoint/com.foobar.MyEntryPoint.nocache.js
        com.foobar.MyEntryPoint/12312312313213123123123.cache.html
        WEB-INF/web.xml
        WEB-INF/portlet.xml
        WEB-INF/liferay-portlet.xml
        ... etc

    My web.xml is pretty much empty (it only has the displayName). On deployment it is rewritten by Liferay to have a series of filters, in particular a "Header Filter" (filter-class com.liferay.portal.kernel.servlet.PortalClassLoaderFilter, delegating to com.liferay.portal.servlet.filters.header.HeaderFilter) with init-params Cache-Control: max-age=315360000, public and Expires: 315360000, mapped to *.js.

    This filter adds an Expires header for about 2020 to the .nocache.js files... the trouble is these files really shouldn't be cached (the hint is in the name :) For development purposes I have worked around this by disabling the filter globally using:

        com.liferay.portal.servlet.filters.header.HeaderFilter=false

    in portal-ext.properties. What I would like to do is one of the following:

    Disable HeaderFilter only for this portlet or war file (I can always add my own Expires header).
    Add an init-param to the HeaderFilter so it matches anything other than .nocache.js files.

    Any ideas how either of these things could be achieved?

    Stack: liferay-6.0.1 CE, Windows 7, java 1.6.0_18, GWT 2.0.3

  • How to pass a value from a method to property procedure in c#?

    - by sameer
    Here is my code. The Jewellery class is my main class, in which I inherit a connection string class. I want to take the value returned by the ReadData() method (displayString), store it in the LM_code property, and use it in the insert query below.

        class Jewellery : Connectionstr
        {
            string lmcode;
            public string LM_code   // <-- I want this to hold the value returned by ReadData()
            {
                get { return lmcode; }
                set { lmcode = value; }
            }

            string mname;
            public string M_Name
            {
                get { return mname; }
                set { mname = value; }
            }

            string desc;
            public string Desc
            {
                get { return desc; }
                set { desc = value; }
            }

            public string ReadData()
            {
                OleDbDataReader dr;
                string jid = string.Empty;
                string displayString = string.Empty;
                String query = "select max(LM_code) from Master_Accounts";
                dr = Datamanager.RunExecuteReader(Constr, query);
                if (dr.Read())
                {
                    jid = dr[0].ToString();
                    if (string.IsNullOrEmpty(jid))
                    {
                        jid = "AM0000";
                    }
                    int len = jid.Length;
                    string split = jid.Substring(2, len - 2);
                    int num = Convert.ToInt32(split);
                    num++;
                    displayString = jid.Substring(0, 2) + num.ToString("0000");
                    dr.Close();
                }
                return displayString;   // I want to pass this value to the LM_code property above
            }

            public void add()
            {
                String query = "insert into Master_Accounts values ('" + LM_code + "','" + M_Name + "'," + "'" + Desc + "')";
                Datamanager.RunExecuteNonQuery(Constr, query);
            }
        }

    If possible, can you edit this code? Anticipated thanks, sameer.
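
    Not from the original post, but a sketch of the usual way to wire this up: since ReadData() already returns the generated code, the add() method (or whoever calls it) can simply assign that return value to the property before building the insert statement.

        public void add()
        {
            // Populate the property from the method before the query string is built.
            LM_code = ReadData();

            String query = "insert into Master_Accounts values ('" + LM_code + "','"
                           + M_Name + "','" + Desc + "')";
            Datamanager.RunExecuteNonQuery(Constr, query);
        }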

  • UTF-8 to Unicode conversion

    - by sandeep
    Hi, I am having problems with converting UTF-8 to Unicode. Below is the code:

        int charset_convert(char *string, char *to_string, char *charset_from, char *charset_to)
        {
            char *from_buf, *to_buf, *pointer;
            size_t inbytesleft, outbytesleft, ret;
            size_t TotalLen;
            iconv_t cd;

            if (!charset_from || !charset_to || !string) /* sanity check */
                return -1;

            if (strlen(string) < 1)
                return 0; /* we are done, nothing to convert */

            cd = iconv_open(charset_to, charset_from);
            /* Did I succeed in getting a conversion descriptor ? */
            if (cd == (iconv_t)(-1)) {
                /* I guess not */
                printf("Failed to convert string from %s to %s ",
                       charset_from, charset_to);
                return -1;
            }

            from_buf = string;
            inbytesleft = strlen(string);

            /* allocate max sized buffer, assuming target encoding may be 4 byte unicode */
            outbytesleft = inbytesleft * 4;
            pointer = to_buf = (char *)malloc(outbytesleft);
            memset(to_buf, 0, outbytesleft);
            memset(pointer, 0, outbytesleft);

            ret = iconv(cd, &from_buf, &inbytesleft, &pointer, &outbytesleft);
            memcpy(to_string, to_buf, (pointer - to_buf));
        }

    main():

        int main()
        {
            char UTF []= {'A', 'B'};
            char Unicode[1024]= {0};
            char* ptr;
            int x=0;
            iconv_t cd;

            charset_convert(UTF, Unicode, "UTF-8", "UNICODE");

            ptr = Unicode;
            while(*ptr != '\0')
            {
                printf("Unicode %x \n", *ptr);
                ptr++;
            }
            return 0;
        }

    It should give A and B but I am getting:

        ffffffff
        fffffffe
        41

    Thanks, Sandeep

  • Python MD5 Hash Faster Calculation

    - by balgan
    Hi everyone. I will try my best to explain my problem and my line of thought on how I think I can solve it.

    I use this code:

        for root, dirs, files in os.walk(downloaddir):
            for infile in files:
                f = open(os.path.join(root, infile), 'rb')
                filehash = hashlib.md5()
                while True:
                    data = f.read(10240)
                    if len(data) == 0:
                        break
                    filehash.update(data)
                print "FILENAME: ", infile
                print "FILE HASH: ", filehash.hexdigest()

    and using

        start = time.time()
        elapsed = time.time() - start

    I measure how long it takes to calculate a hash. Pointing my code at a file of 653 megs, this is the result:

        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.624
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.373
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.540

    OK, so around 12 seconds on a 653 MB file. My problem is that I intend to use this code on a program that will run through multiple files, some of which might be 4/5/6 GB, and it will take way longer to calculate. What I am wondering is whether there is a faster way for me to calculate the hash of the file? Maybe by doing some multithreading? I used another script to check the use of the CPU second by second and I see that my code is only using 1 out of my 2 CPUs, and only at 25% max. Any way I can change this?

    Thank you all in advance for the given help.
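
    Not from the original post, but a small sketch of the cheapest change to try first: with a 10 KB read size, much of the time goes into Python-level loop overhead rather than the C hashing code, so reading in much larger blocks usually moves the bottleneck back to the disk. MD5 itself is sequential, so threads can't parallelize one file's digest; hashing different files on different processes is the realistic way to use the second CPU.

        import hashlib

        def md5_file(path, blocksize=1024 * 1024):
            """Hash a file in 1 MiB blocks instead of 10 KB ones (a sketch, not a benchmark)."""
            filehash = hashlib.md5()
            with open(path, 'rb') as f:
                while True:
                    data = f.read(blocksize)
                    if not data:
                        break
                    filehash.update(data)
            return filehash.hexdigest()

        # Hypothetical usage:
        # print(md5_file('/home/tiago/freebsd.iso'))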

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socke

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent, and then keep calling send again until the whole message has been accepted. For example, this is a simple example of a common scheme:

        int send_all(int sock, unsigned char *buffer, int len)
        {
            int nsent;
            while(len > 0)
            {
                nsent = send(sock, buffer, len, 0);
                if(nsent == -1)   // error
                    return -1;
                buffer += nsent;
                len -= nsent;
            }
            return 0;   // ok, all data sent
        }

    Even the BSD manpage mentions that "...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...", which indicates that we should assume send may return without sending all data.

    Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming: not in the beginning chapters, but the more advanced examples use his own writen (write all data) function instead of calling write.

    Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer, and the socket is blocking, then send should block and return only when the whole send request has been accepted. I mean, in the code example above, what happens if send returns with less data sent is that it is immediately called again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we end up with an inefficient loop where we keep trying to send data on a socket that cannot accept it, or else?

    So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided at all, and non-blocking sockets together with select should be used instead.

  • Algorithm to see if keywords exist inside a string

    - by rksprst
    Let's say I have a set of keywords in an array: {"olympics", "sports tennis best", "tennis", "tennis rules"}. I then have a large list (up to 50 at a time) of strings (actually tweets), so each is a max of 140 characters. I want to look at each string and see which keywords are present in it. In the case where a keyword is composed of multiple words, like "sports tennis best", the words don't have to be together in the string, but all of them have to show up. I'm having trouble figuring out an algorithm that does this efficiently. Do you guys have suggestions on a way to do this? Thanks!

    Edit: To explain a bit better, each keyword has an id associated with it, so {1:"olympics", 2:"sports tennis best", 3:"tennis", 4:"tennis rules"}. I want to go through the list of strings/tweets and see which group of keywords match. The output should be: this tweet belongs with keyword #4 (multiple matches may be made, so anything that matches keyword 2 would also match 3, since they both contain tennis). When there are multiple words in the keyword, e.g. "sports tennis best", they don't have to appear together but all have to appear. E.g. this will correctly match: "i just played tennis, i love sports, its the best"... since this string contains "sports tennis best" it will match and be associated with the keyword id (which is 2 for this example).
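
    A sketch of one straightforward way to apply the "all words must appear, in any order" rule described above (the keyword table is the one from the edit; tokenizing on word characters is an assumption):

        import re

        keywords = {1: "olympics", 2: "sports tennis best", 3: "tennis", 4: "tennis rules"}
        # Pre-split each keyword into the set of words that all have to be present.
        keyword_words = {kid: set(text.split()) for kid, text in keywords.items()}

        def matching_keywords(tweet):
            tweet_words = set(re.findall(r"\w+", tweet.lower()))
            # A keyword matches when every one of its words appears somewhere in the tweet.
            return [kid for kid, words in keyword_words.items() if words <= tweet_words]

        print(matching_keywords("i just played tennis, i love sports, its the best"))  # [2, 3]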
