Search Results

Search found 1976 results on 80 pages for 'nic wise'.

Page 70/80 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • Linux Lightweight Distro and X Windows for Development

    - by Fernando Barrocal
    Hey all... I want to build a lightweight Linux configuration to use for development. The first idea is to use it inside a virtual machine under Windows, or on old laptops with 1 GB RAM tops. Maybe even a distributable environment for developers. So the whole idea is a LAMP server, a Java application server (Tomcat or Jetty) and X Windows (any window manager, from FVWM to Enlightenment), Eclipse, maybe jEdit and of course Firefox.
    Edit: I am changing this post to compile a list of distros and window managers that can be used to configure a real lightweight development environment. I am using personal experience on this matter as the basis. Info about the distros can easily be found on their sites, so please focus on personal use of those systems.
    Distros:
    Ubuntu / Xubuntu - Pros: personal experience in old systems or low-RAM environments - @Schroeder, @SCdF; several suggestions based on personal knowledge - @Kyle, @Peter Hoffmann
    Gentoo - Pros: not targeted at desktop users - @paan; doesn't come with a huge amount of applications - @paan
    Slackware - Pros: suggested as the best performer given a wise install/configuration - @Ryan
    Damn Small Linux - Pros: main focus is the lightweight factor - 50 MB LiveCD - @Ryan
    Debian - Pros: very versatile, can be configured for both heavy and lightweight computers - @Ryan; APT as package manager - @Kyle; based on compatibility and usability - @Kyle
    -- Feel free to add pros and cons to this, so we can compile a good reference. --
    X Windows suggestions keep coming back to XFCE. If others are to be added, open a section for them like the distro one :)

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but they are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is simply to reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
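    A rough sketch of the two-pass idea described above, in Python rather than the poster's C# (all names and the field callback are assumptions, not the poster's code): sample a coarse grid first and keep only the coarse cells whose corner values come anywhere near the isolevel, then run the fine marching-cubes sampling inside those cells only. The main pitfall the poster asks about is visible in the margin parameter: surface detail thinner than the coarse step can hide inside a cell whose corners all miss the isolevel, so the margin (or a gradient bound on the field) has to be generous enough to avoid culling it.

```python
import numpy as np

def coarse_active_cells(field, origin, size, coarse_step, isolevel, margin=0.0):
    """First pass: sample `field(x, y, z)` on a coarse grid and keep only the
    coarse cells whose eight corner values are not all clearly below (or all
    clearly above) the isolevel. The second pass (not shown) runs the normal
    fine-grained marching cubes inside these cells only."""
    nx, ny, nz = (int(size[i] / coarse_step) for i in range(3))
    corners = np.empty((nx + 1, ny + 1, nz + 1))
    for i in range(nx + 1):
        for j in range(ny + 1):
            for k in range(nz + 1):
                corners[i, j, k] = field(origin[0] + i * coarse_step,
                                         origin[1] + j * coarse_step,
                                         origin[2] + k * coarse_step)
    active = []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                cell = corners[i:i + 2, j:j + 2, k:k + 2]
                lo, hi = cell.min(), cell.max()
                # Keep the cell unless it is entirely below or entirely above
                # the isolevel by more than the safety margin.
                if not (hi < isolevel - margin or lo > isolevel + margin):
                    active.append((i, j, k))
    return active

if __name__ == "__main__":
    # Example: a sphere-like field; only cells near the r=1 surface survive.
    sphere = lambda x, y, z: 1.0 / (1e-6 + x * x + y * y + z * z)
    cells = coarse_active_cells(sphere, origin=(-2, -2, -2), size=(4, 4, 4),
                                coarse_step=0.5, isolevel=1.0, margin=0.25)
    print(len(cells), "of", 8 ** 3, "coarse cells need fine sampling")
```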

    Read the article

  • Organizing code, logical layout of segmented files

    - by David H
    I have known enough about programming to get me in trouble for about 10 years now. I have no formal education, though I've read many books on the subject for various languages. The language I am primarily focused on now is PHP, at least for the scale of things I am doing now. I have used some OOP classes for a while, but never took the dive into understanding the principles behind the scenes. I am still not at the level I would like to be expression-wise... however my recent reading of a book titled The Object-Oriented Thought Process has me wanting to advance my programming skills. With motivation from the new concepts, I have started a new project for which I've coded some reusable classes that deal with user auth, user profiles, database interfacing, and some other stuff I use regularly on most projects. Now, having split my typical garbled spaghetti-bowl mess of code into somewhat organized files, I've run into some problems when it comes to making sure files are all included when they need to be, how to logically divide the scripts up into classes, and how segmented I should be making each class. I guess I have rambled on enough about much of nothing, but what I am really asking for is advice from people, or suggested reading, that focuses not on specific functions and formats of code, but on the logical layout of projects that are larger than just a hobby project. I want to learn how to do things properly, and while I am still learning in some areas, this is something that I have no clue about other than just being creative, and trial/error. Mostly error. Thanks for any replies. This place is great.

    Read the article

  • even PHP has 'bugs' with IE

    - by silversky
    It's not a real bug, but it is certainly not what you would expect. I have this sample code to upload images:
    <?php
    if ($type == "image/jpg" || $type == "image/jpeg" || $type == "image/pjpeg" ||
        $type == "image/tiff" || $type == "image/gif" || $type == "image/png") {
        // make upload
    } else {
        echo "Incorrect format ....";
    }
    ?>
    The problem is that if I modify the extension of an image, say to .jpgq or even .jpg%, and I try to upload it, FF and Chrome will say that the file's type is "application/octet-stream" and the condition will normally be false. BUT since IE is 'smarter' than the other browsers, it will say that the file is "image/pjpeg", the condition will be true and the file will be uploaded - and of course later no browser will be able to read/view the image. It is not a bug, because msdn.microsoft.com says: "If the "suggested" (server-provided) MIME type is unknown (not known and not ambiguous), FindMimeFromData immediately returns this MIME type" and "If the server-provided MIME type is either known or ambiguous, the buffer is scanned in an attempt to verify or obtain a MIME type from the actual content", plus other 'innovative solutions from Microsoft'. SO my questions are: Why is IE so 'smart' that it knows the real MIME type when I upload the file, yet the image then fails to be read from the server? How can I work around this issue (if the file doesn't have the right extension the condition has to be false)? Is it wise to check the extension (and not the MIME type)? Is any of the above extensions not recommended? Should I add others?
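    The usual way around browser-reported MIME types is to ignore both the extension and the Content-Type the browser sends, and sniff the file's own leading bytes on the server (in PHP, finfo_file() or getimagesize() can do this). Purely as an illustration of the idea, here is a minimal magic-byte check in Python; the function name and accepted formats mirror the poster's list but are otherwise assumptions.

```python
# Minimal magic-byte sniffing for the image types the poster accepts.
# Because this inspects content, renaming foo.png to foo.jpg% changes nothing.

IMAGE_SIGNATURES = {
    "jpeg": [b"\xff\xd8\xff"],
    "png":  [b"\x89PNG\r\n\x1a\n"],
    "gif":  [b"GIF87a", b"GIF89a"],
    "tiff": [b"II*\x00", b"MM\x00*"],
}

def sniff_image_type(path):
    """Return 'jpeg'/'png'/'gif'/'tiff' based on the file's leading bytes,
    or None if the content matches none of the accepted formats."""
    with open(path, "rb") as f:
        head = f.read(16)
    for kind, signatures in IMAGE_SIGNATURES.items():
        if any(head.startswith(sig) for sig in signatures):
            return kind
    return None

# Usage sketch: reject the upload unless the *content* is an accepted image.
# if sniff_image_type(uploaded_tmp_path) is None: reject_upload()
```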

    Read the article

  • Are there any radix/patricia/critbit trees for Python?

    - by Andrew Dalke
    I have about 10,000 words used as a set of inverted indices to about 500,000 documents. Both are normalized so the index is a mapping of integers (word id) to a set of integers (ids of documents which contain the word). My prototype uses Python's set as the obvious data type. When I do a search for a document I find the list of N search words and their corresponding N sets. I want to return the set of documents in the intersection of those N sets. Python's "intersect" method is implemented as a pairwise reduction. I think I can do better with a parallel search of sorted sets, so long as the library offers a fast way to get the next entry after i. I've been looking for something like that for some time. Years ago I wrote PyJudy but I no longer maintain it and I know how much work it would take to get it to a stage where I'm comfortable with it again. I would rather use someone else's well-tested code, and I would like one which supports fast serialization/deserialization. I can't find any, or at least not any with Python bindings. There is avltree which does what I want, but since even the pair-wise set merge take longer than I want, I suspect I want to have all my operations done in C/C++. Do you know of any radix/patricia/critbit tree libraries written as C/C++ extensions for Python? Failing that, what is the most appropriate library which I should wrap? The Judy Array site hasn't been updated in 6 years, with 1.0.5 released in May 2007. (Although it does build cleanly so perhaps It Just Works.)
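    Not a library pointer, but the "parallel search of sorted sets, so long as the library offers a fast way to get the next entry after i" access pattern can be sketched in pure Python with sorted lists and bisect; a C/C++ radix or critbit tree would do the same walk faster, but the shape of the intersection is the same. Names below are made up for illustration.

```python
from bisect import bisect_left

def intersect_sorted(lists):
    """Intersect several sorted lists of document ids by repeatedly seeking
    'the next entry >= candidate' in each list - the operation the poster
    wants the tree library to provide."""
    if not lists:
        return []
    lists = sorted(lists, key=len)        # drive from the rarest word
    first = lists[0]
    positions = [0] * len(lists)
    out, i = [], 0
    while i < len(first):
        candidate = first[i]
        ok = True
        for li in range(1, len(lists)):
            lst = lists[li]
            pos = bisect_left(lst, candidate, positions[li])
            positions[li] = pos
            if pos == len(lst):
                return out                # one posting list exhausted: done
            if lst[pos] != candidate:
                # candidate missing here; advance the driver to this larger id
                i = bisect_left(first, lst[pos], i)
                ok = False
                break
        if ok:
            out.append(candidate)
            i += 1
    return out

print(intersect_sorted([[1, 3, 5, 7, 9, 11], [2, 3, 7, 11, 20], [3, 4, 7, 11]]))
# -> [3, 7, 11]
```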

    Read the article

  • Database design grouping contacts by lists and companies

    - by Serge
    Hi, I'm wondering what would be the best way to group contacts by their company. Right now a user can group their contacts by custom created lists, but I'd like to be able to group contacts by their company as well as store the contact's position (i.e. Project Manager of XYZ company). Database-wise, this is what I have for grouping contacts into lists:
    contact: [id_contact] [int] PK NOT NULL, [lastName] [varchar] (128) NULL, [firstName] [varchar] (128) NULL, ......
    contact_list: [id_contact] [int] FK, [id_list] [int] FK
    list: [id_list] [int] PK, [id_user] [int] FK, [list_name] [varchar] (128) NOT NULL, [description] [TEXT] NULL
    Should I implement something similar for grouping contacts by company? If so, how would I store the contact's position in that company, and how can I prevent data corruption if a user modifies a contact's company name? For instance, John Doe changed companies but the other co-workers are still in the old company. I doubt that will happen often (it might not happen at all) but better safe than sorry. I'm also keeping an audit trail, so in a way the contact would still need to be linked to the old company as well as the new one, but without confusing which company he's actually working at at the moment. I hope that made sense... Has anyone encountered such a problem?
    UPDATE: Would something like this make sense?
    contact_company: [id_contact_company] [int] PK, [id_contact] [int] FK, [id_company] [int] FK, [contact_title] [varchar] (128)
    company: [id_company] [int] PK NOT NULL, [company_name] [varchar] (128) NULL, [company_description] [varchar] (300) NULL, [created_date] [datetime] NOT NULL
    This way a contact can work for more than one company, and contacts can be grouped by companies.

    Read the article

  • Wildcard subdomain .htaccess and Codeigniter

    - by Gautam
    Hi All, I am trying to create the proper .htaccess that would allow me to map URLs as such:
    http://domain.com/ --> http://domain.com/home
    http://domain.com/whatever --> http://domain.com/home/whatever
    http://user.domain.com/ --> http://domain.com/user
    http://user.domain.com/whatever --> http://domain.com/user/whatever/
    Here, someone would type in the URL on the left, but internally it would be rewritten as if it were the URL on the right. The subdomain is dynamic (that is, http://user.domain.com isn't an actual subdomain but would be a .htaccess rewrite). Also, /home is my default controller, so no subdomain would internally force it to the /home controller, and any paths following it (as in example #2 above) would go to the (catch-all) function within that controller. Likewise, if a subdomain is passed, it would be passed as a (catch-all) controller along with any (catch-all) functions for it (as in example #4 above). Hopefully I'm not asking much here, but I can't seem to figure out the proper .htaccess or routing rules (in CodeIgniter) for this. httpd.conf and hosts are set up just fine.
    EDIT #1: Here's my .htaccess, which is coming close but is messing up at some point:
    RewriteEngine On
    RewriteBase /
    RewriteCond %{SCRIPT_FILENAME} !-d
    RewriteCond %{SCRIPT_FILENAME} !-f
    RewriteCond %{HTTP_HOST} ^([a-z0-9-]+).domain [NC]
    RewriteRule (.*) index.php/%1/$1 [QSA]
    RewriteCond $1 !^(index\.php|images|robots\.txt)
    RewriteRule ^(.*)$ /index.php/$1 [L,QSA]
    With the above, when I visit http://test.domain/abc/123, this is what I notice in the $_SERVER var (I've removed some of the fields):
    Array ( [REDIRECT_STATUS] => 200 [SERVER_NAME] => test.domain [REDIRECT_URL] => /abc/123 [QUERY_STRING] => [REQUEST_URI] => /abc/123 [SCRIPT_NAME] => /index.php [PATH_INFO] => /test/abc/123 [PATH_TRANSLATED] => redirect:\index.php\test\test\abc\123\abc\123 [PHP_SELF] => /index.php/test/abc/123 )
    You can see that PATH_TRANSLATED is not being formed properly, and I think that may be what's screwing things up.

    Read the article

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling understanding what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things) but I am still unsure if I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above mentioned MSDN article states: The pages in the data chain and the rows in them are ordered on the value of the clustered index key. And Using Clustered Indexes for example states: For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted. Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record a new data page would be created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-Tree? This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index it can read data pages sequentially (following the links) but these pages are not (necessarily) block wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)

    Read the article

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I'm not sure if it is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly
    open System
    open System.IO
    open System.IO.Pipes
    open System.Text
    open System.Collections.Generic
    open System.Runtime.Serialization
    [<DataContract>]
    type Quote = {
        [<field: DataMember(Name="securityIdentifier") >] RicCode:string
        [<field: DataMember(Name="madeOn") >] MadeOn:DateTime
        [<field: DataMember(Name="closePrice") >] Price:float }
    let m_cache = new Dictionary<string, Quote>()
    let ParseQuoteString (quoteString:string) =
        let data = Encoding.Unicode.GetBytes(quoteString)
        let stream = new MemoryStream()
        stream.Write(data, 0, data.Length);
        stream.Position <- 0L
        let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
        let results:Quote array = ser.ReadObject(stream) :?> Quote array
        results
    let RefreshCache quoteList =
        m_cache.Clear()
        quoteList |> Array.iter(fun result->m_cache.Add(result.RicCode, result))
    let EstablishConnection() =
        let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
        let mutable sr = null
        printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
        pipeServer.WaitForConnection()
        printfn "[F#] Client connected."
        try
            // Stream for the request.
            sr <- new StreamReader(pipeServer)
        with
        | _ as e -> printfn "[F#]ERROR: %s" e.Message
        sr
    while true do
        let sr = EstablishConnection()
        // Read request from the stream.
        printfn "[F#] Ready to Receive data"
        sr.ReadLine()
        |> ParseQuoteString
        |> RefreshCache
        printfn "[F#]Quot Size, %d" m_cache.Count
        let quot = m_cache.["MSFT.OQ"]
        printfn "[F#]RIC: %s" quot.RicCode
        printfn "[F#]MadeOn: %s" (String.Format("{0:T}",quot.MadeOn))
        printfn "[F#]Price: %f" quot.Price

    Read the article

  • C# JSON serialization

    - by Bridget the Midget
    I'm trying out the HighStock library for creating stock charts. To fill the chart with data, their example specifies this source. The first parameter is unix time in milliseconds and the second parameter is the stock closing price. I don't know if this is valid JSON, but I would argue that the following would be a more appropriate way of writing it:
    [{"Closing":63.15000,"Date":1262559600000},{"Closing":64.75000,"Date":1262646000000}, ...
    I guess that I have no option other than to adapt to HighStock's syntax. I could solve this by looping and appending the correct syntax to a string, but that seems rudimentary. Would it be wiser to serialize C# objects to create my JSON, and if so - how can I produce the syntax specified in the example? Let's just say this is my C# object:
    public class Quote {
        public double Date { get; set; }
        public decimal Closing { get; set; }
    }
    Am I making it unnecessarily complex? Should I just format a JSON string?
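    For what it's worth, the target format is simply an array of [millisecond-timestamp, value] pairs rather than an array of objects with named properties. The poster works in C#, but as a hedged sketch of producing that exact shape (sample dates and values are invented), the idea looks like this in Python; the same pattern - serialize two-element arrays instead of objects - carries over to any JSON serializer.

```python
import json
from datetime import datetime, timezone

# Hypothetical quotes: (date, closing price)
quotes = [
    (datetime(2010, 1, 4, tzinfo=timezone.utc), 63.15),
    (datetime(2010, 1, 5, tzinfo=timezone.utc), 64.75),
]

# HighStock-style series: [[ms_since_epoch, close], ...]
series = [[int(d.timestamp() * 1000), close] for d, close in quotes]
print(json.dumps(series))
# [[1262563200000, 63.15], [1262649600000, 64.75]]
```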

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I have gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those. Is this the right way to do this? What if the web server doesn't support resuming? (my guess is it will ignore the "Range" and just send the entire file) When sending the HTTP requests, should I specify the entire split size in the range? Or maybe ask for smaller pieces, say 1024 KB per request? When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-efficient? What if I have several downloads running simultaneously? If I'm not using a memory-mapped file, should I open the file once per allowed connection? Or, when needing to write to the file, simply seek? (if I did use a memory-mapped file this would be really easy, since I could simply have several pointers). Note: I'll probably be using Qt, but this is a general question so I left code out of it.
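    A hedged sketch of the Range mechanics in Python rather than the poster's C++ (the URL is a placeholder): request a byte range and check whether the server answered 206 Partial Content. A server that does not support ranges replies 200 with the whole body, which answers the "what if the web server doesn't support resuming" question - detect the 200 and fall back to a single sequential download.

```python
import urllib.request

def fetch_range(url, start, end):
    """Ask for bytes [start, end] inclusive. Returns (supports_ranges, data)."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
        # 206 = the server honoured the range; 200 = it sent the whole file.
        return resp.status == 206, data

# Hypothetical usage:
supports, chunk = fetch_range("http://example.com/file.bin", 0, 1023)
if supports:
    print("got", len(chunk), "bytes of the requested range")
else:
    print("server ignored Range; fall back to one full sequential download")
```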

    Read the article

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I've got lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, of which any one can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI or FastCGI bound, right? I understand memcache could be an option, though this would require another (local) socket connection which I'd rather avoid, since everything in memory would be much faster. The solution can work under Windows or Unix... it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (and the occasional hit to the SQL server backend). Although it is fast (for PHP anyway), Apache has real problems when 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).
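    Just to illustrate the "tables live inside one long-lived web process" idea the poster is after (not a recommendation of Python or this server for 500 req/sec): the lookup tables are loaded once at startup and every request is a plain in-process dictionary hit, with no memcache or database hop. The keys and values below are invented.

```python
# Minimal in-process lookup server: data loaded once, served from memory.
from wsgiref.simple_server import make_server

LOOKUP = {"/price/ABC": b"42.10", "/price/XYZ": b"17.95"}  # loaded once at start

def app(environ, start_response):
    body = LOOKUP.get(environ["PATH_INFO"])
    if body is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"unknown key"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()
```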

    Read the article

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and write the result to a CSV file; it has around 90000 rows of data. My question is, if I use PDOStatement->fetch as I am using it here:
    // First, prepare the statement, using placeholders
    $query = "SELECT * FROM tableName";
    $stmt = $this->connection->prepare($query);
    // Execute the statement
    $stmt->execute();
    var_dump($stmt->fetch(PDO::FETCH_ASSOC));
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        echo "Hi";
        // Export every row to a file
        fputcsv($data, $row);
    }
    After every fetch from the database, will the result of that fetch be stored in memory? Meaning, when I do the second fetch, will memory hold the data of the first fetch as well as the second? And so, if I have 90000 rows of data and I am fetching one row at a time, is memory updated with each new fetch result without releasing the results of previous fetches, so that by the last fetch memory already holds 89999 rows? Is this how PDOStatement::fetch works? Performance-wise, how does it stack up against PDOStatement::fetchAll?
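    Roughly speaking: fetch() hands back one row at a time and the previous $row can be garbage-collected once it is overwritten, whereas fetchAll() builds a single PHP array holding all 90000 rows at once; whether the *driver* keeps the whole result set client-side depends mostly on buffering (the MySQL PDO driver buffers by default). As a language-neutral sketch of the streaming pattern the poster wants - one row in memory at a time, written straight to CSV - here is the equivalent loop in Python with sqlite3 (the database file and table name are placeholders):

```python
import csv
import sqlite3

conn = sqlite3.connect("example.db")       # stand-in for the real database
cur = conn.cursor()
cur.execute("SELECT * FROM myTable")

with open("export.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cur.description])  # header row
    for row in cur:        # rows are pulled one at a time, never all at once
        writer.writerow(row)
```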

    Read the article

  • Efficient way to create a large number of SharePoint folders

    - by BeraCim
    Hi all: I'm currently creating a large number of SharePoint folders within a list (e.g. ~800 folders), with each folder containing a different number of items. The way it is currently done is that the code programmatically reads the content types, items, event listeners and the like of the same folder from another web, then creates the same folder in the current web. That ran reasonably fine and fast in a dev environment. However, when it moved to an environment with WFEs and farms, it slowed down a lot. I have checked that there are no leaks in the code, and that the code follows SharePoint coding best practices. At the moment I'm looking at it at the code level. From your experience, are there any efficient ways of creating a large number of SharePoint folders, lists and items? EDIT: I'm currently using the SharePoint API, but will be looking at moving to web services in the future. I'm interested in looking at both options though. Code-wise, it's just the general reading of a folder and its content types plus items and their details, then creating the same folder in the same list with the same content types, then copying over the items using a patch update. I want to know whether there are more efficient ways of doing the above. Thanks.

    Read the article

  • Oracle: Insertion on an indexed table, avoiding duplicates. Looking for tips and advice.

    - by Tom
    Hi everyone, I'm looking for the best solution (performance-wise) to achieve this. I have to insert records into a table, avoiding duplicates. For example, take table A:
    Insert into A (
      Select DISTINCT [FIELDS] from B,C,D..
      WHERE (JOIN CONDITIONS ON B,C,D..)
      AND NOT EXISTS (
        SELECT * FROM A ATMP WHERE ATMP.SOMEKEY = A.SOMEKEY
      )
    );
    I have an index on A.SOMEKEY, just to optimize the NOT EXISTS query, but I realize that inserting into an indexed table will be a performance hit. So I was thinking of duplicating table A in a global temporary table, where I would keep the index. Then, removing the index from table A and executing the query, but modified:
    Insert into A (
      Select DISTINCT [FIELDS] from B,C,D..
      WHERE (JOIN CONDITIONS ON B,C,D..)
      AND NOT EXISTS (
        SELECT * FROM GLOBAL_TEMPORARY_TABLE_A ATMP WHERE ATMP.SOMEKEY = A.SOMEKEY
      )
    );
    This would solve the "inserting into an indexed table" problem, but I would have to update the global temporary A with each insertion I make. I'm kind of lost here. Is there a better way to achieve this? Thanks in advance,

    Read the article

  • Where and how to start on C# and the .NET Framework?

    - by Rachel
    Currently, I have been working as a PHP developer for approximately 1 year and I want to learn about C# and the .NET Framework. I do not have any experience with the .NET Framework or C#, and there is no firm basis for going for C# and .NET versus Java or any other programming language; this decision is made purely from a career and job-opportunity point of view. So my questions are:
    Is my decision to go the C# and .NET Framework route after working for some time as a PHP developer a wise one?
    What are good resources which I can refer to and learn from to gain knowledge of C# and the .NET Framework?
    How should I go about learning C# and the .NET Framework?
    Which technologies should I be learning or have experience with to be considered a C#/.NET developer? I am listing some below; please add or suggest more if I am missing any.
    Technologies:
    C# - THE LANGUAGE; GUI APPLICATION DEVELOPMENT; WINDOWS CONTROL LIBRARY; DELEGATES; DATA ACCESS WITH ADO.NET; MULTI-THREADING; ASSEMBLIES; WINDOWS SERVICES
    VB: INTRODUCTION TO VISUAL STUDIO .NET; WINDOWS CONTROL LIBRARY; DATA ACCESS WITH ADO.NET
    ASP.NET WEB TECHNOLOGIES: CONTROLS; VALIDATION CONTROLS; STATE MANAGEMENT; CACHING; ASP.NET CONFIGURATION; ADO.NET; TRACING & SECURITY IN ASP.NET
    XML PROGRAMMING; WEB SERVICES
    CRYSTAL REPORTS; SSRS (SQL Server Reporting Services); MS Reports
    LINQ: .NET Language-Integrated Query; LINQ to SQL: SQL Integration
    WCF: Windows Communication Foundation - What Is WCF?; Fundamental WCF Concepts; WCF Architecture
    WPF: Windows Presentation Foundation - Getting Started (WPF); Application Development; WPF Fundamentals
    What are your thoughts and suggestions on this, and from a job and market perspective, what areas of C#/.NET development should I focus on? I know this is a very subjective and long question, but advice would be highly appreciated. Thanks.

    Read the article

  • Flexible CMS for non-programmers

    - by Bunkerbewohner
    Hello! I'm looking for a content management system that allows creating single pages out of predefined blocks flexibly. For example, I have a "product" block that is used to show products on a page; it may appear numerous times on one page with different contents, but I also might want to use it on different pages. I also have simple generic blocks like multi-column text blocks (1 column, 2 columns etc.) where I just want to insert this kind of structure into the page and enter any text. So I'm looking for a CMS with something like a building-block / module concept for contents. I'm already searching the web, but there are so many CMSs that I can't look into every one. So if anyone knows a solution that might be right for me, please tell me! Technology-wise it just has to run on Linux. If it's open source / free that's great, but I might also pay for it if it offers the features I want. Thanks for any hints in advance!

    Read the article

  • Can't get automated release working with Hudson + Git + Maven Release Plugin

    - by Christopher Maier
    As the title says, I'm trying to get an automated release job working on Hudson. It's a Maven project, and all the code is in Git. Manually, I do the release on my personal machine like so:
    git checkout master
    mvn -B release:prepare release:perform
    This works perfectly. The Maven release plugin properly pushes the release tag to the origin repository as well as the next commit that bumps the version to the next SNAPSHOT. However, when I run this same Maven job through Hudson (either by creating my own "release" job or by using the M2 Release Plugin) it doesn't work so well. The release tag gets pushed out to the origin repository, and the release gets pushed out to our Nexus repository, but the subsequent commit that bumps the version to the next SNAPSHOT doesn't go out. Furthermore, the "master" branch in the origin repository doesn't get changed at all. I've looked in Hudson's workspace for the job, however, and the version has been updated. After looking at the output from the Hudson job, it appears that the Git plugin does not actually check out "master", but rather its SHA1 id. That is, if the "master" branch label points to commit "f6af76f541f1a1719e9835cdb46a183095af6861", Hudson does
    git checkout -f f6af76f541f1a1719e9835cdb46a183095af6861
    instead of
    git checkout -f master
    As a result, the changes that the Maven release plugin makes are not actually on any branch (certainly not on "master") and these changes don't make it to the origin repository. It runs on the right code, but bookkeeping-wise the changes seem to get lost because no branch label points to them. Has anybody gotten the Hudson + Git + Maven Release Plugin combo to work properly? Is there some additional configuration somewhere I can set to make this happen? Or is this a bug in the Hudson Git plugin?

    Read the article

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme, but that doesn't seem like a wise choice since the scheme has been rejected by the W3C. Some interesting examples: The heart character. If I type this into my browser: http://www.google.com/search?q=♥ then copy and paste it, I see this URL: http://www.google.com/search?q=%E2%99%A5 which makes it seem like Firefox (or Safari) is doing this: urllib.quote_plus(x.encode("latin-1")) gives '%E2%99%A5', which makes sense, except for things that can't be encoded in Latin-1, like the triple dot character … If I type the URL http://www.google.com/search?q=… into my browser and then copy and paste, I get http://www.google.com/search?q=%E2%80%A6 back. Which seems to be the result of doing urllib.quote_plus(x.encode("utf-8")), which makes sense since … can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1. Since this seems to be ambiguous: In [67]: u"…".encode('utf-8').decode('latin-1') Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6' works, so I don't know how the browser figures out whether to decode that with UTF-8 or Latin-1. What's the right thing to be doing with the special characters I need to deal with?
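    The practical convention is to percent-encode the UTF-8 bytes of the character and to assume UTF-8 when decoding; Latin-1 is only a legacy fallback, which is why the round trip is unambiguous as long as both sides agree on UTF-8. The poster's snippets are Python 2's urllib; a small sketch in modern Python 3 syntax, where UTF-8 is already the default:

```python
from urllib.parse import quote_plus, unquote_plus

# Percent-encoding is defined over bytes; the convention is UTF-8 bytes.
print(quote_plus("\u2665"))        # '%E2%99%A5'  (the heart character)
print(quote_plus("\u2026"))        # '%E2%80%A6'  (the ellipsis character)

# Decoding assumes UTF-8 as well.
print(unquote_plus("%E2%80%A6"))   # the ellipsis character again
```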

    Read the article

  • How to program critical section for reader-writer systems?

    - by Srinivas Nayak
    Hi, let's say I have a reader-writer system where a reader and a writer run concurrently. 'a' and 'b' are two shared variables which are related to each other, so modifying them needs to be an atomic operation. A reader-writer system can be of the following types: rr, ww, r-w, r-ww, rr-w, rr-ww, where r: single reader, rr: multiple readers, w: single writer, ww: multiple writers. Now, we can have a read method for a reader and a write method for a writer as follows. I have written them per system type.
    rr:
    read_method { read a; read b; }
    ww:
    write_method { lock(m); write a; write b; unlock(m); }
    r-w, r-ww, rr-w, rr-ww:
    read_method { lock(m); read a; read b; unlock(m); }
    write_method { lock(m); write a; write b; unlock(m); }
    For a multiple-readers-only system, shared variable access doesn't need to be atomic. For a multiple-writers system, shared variable access needs to be atomic, so it is locked with 'm'. But for system types 3 to 6, are my read_method and write_method correct? How can I improve them? Sincerely, Srinivas Nayak
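    A runnable Python sketch of cases 3 to 6 above (readers and writers sharing one mutex, so 'a' and 'b' are always observed as a consistent pair). A true reader-writer lock would additionally let the multiple-reader variants overlap their reads, but the plain mutex version below is already correct; it just serializes readers unnecessarily.

```python
import threading

a, b = 0, 0
m = threading.Lock()

def write_pair(new_a, new_b):
    global a, b
    # Writers take the lock so `a` and `b` change as one atomic step.
    with m:
        a, b = new_a, new_b

def read_pair():
    # In the mixed reader/writer systems (r-w, r-ww, rr-w, rr-ww) readers must
    # take the same lock, or they may see a new `a` paired with an old `b`.
    with m:
        return a, b

threads = [threading.Thread(target=write_pair, args=(i, 2 * i)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(read_pair())   # always some consistent pair (i, 2*i)
```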

    Read the article

  • archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will be catering to a large number of users (~1,000,000) and there are lots of child tables where user-specific data is stored (e.g. personal, health, forum contributions...). What would be the best practice to archive users and user-specific information?
    [a] Would it be wise to move the archived users and user-specific information to their respective tables within the same database (e.g. user_archive, user_forum_comments_archive...), OR
    [b] Would you just mark the database entries with a flag in the original table(s) and query only non-archived entries?
    We have a unique constraint on User.loginid; how do you handle this requirement if the users are archived via [a]? That is, if a user with loginid 'samuel' gets moved into the archive table and a new user gets added with the same name in the original table, how would you prevent this? What would be the best strategy to address the unique key constraints?
    We have a requirement to selectively archive records and bring them back if necessary; would you rely on database tools or would you handle this via the persistence APIs exposed by the JPA entity model?

    Read the article

  • Multiple threads modifying a collection in Java??

    - by posdef
    Hi, the project I am working on requires a whole bunch of queries against a database. In principle there are two types of queries I am using:
    (I) read from an Excel file, check a couple of parameters and query the database for hits. These hits are then registered as a series of custom classes. Any hit may (and most likely will) occur more than once, so this part of the code checks and updates the occurrence count in a custom list implementation that extends ArrayList.
    (II) for each hit found, do a detail query and parse the output, so that the classes created in (I) get detailed info.
    I figured I would use multiple threads to save time. However, I can't really come up with a good way to solve the problem that occurs with the collection these items are stored in. To elaborate a little: throughout the execution, objects are supposed to be modified by both (I) and (II). I deliberately didn't copy/paste any code, as it would take big chunks of code to make any sense. I hope it makes some sense with the description above. Thanks,
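    In Java the usual answers are ConcurrentHashMap, Collections.synchronizedList, or explicit synchronized blocks around every access to the shared collection. As a language-agnostic sketch of the pattern (Python here to match the other examples; names are invented), the essential point is that both phase (I) and phase (II) touch the shared map only while holding the same lock:

```python
import threading

hits = {}                       # shared collection: hit id -> info about the hit
hits_lock = threading.Lock()

def register_hit(hit_id):
    """Phase (I) worker: create the entry or bump its occurrence count."""
    with hits_lock:
        entry = hits.setdefault(hit_id, {"count": 0, "details": None})
        entry["count"] += 1

def attach_details(hit_id, details):
    """Phase (II) worker: enrich the same entry under the same lock."""
    with hits_lock:
        if hit_id in hits:
            hits[hit_id]["details"] = details
```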

    Read the article

  • Setting directory security to allow user and deny all

    - by Rita
    I have a WinForms app in which I need to access a secured directory. I'm using impersonation and create a WindowsIdentity to access the folder. My problem is writing unit tests to test the directory security; I'd like to write code that creates a directory secured to only ONE user, which isn't the current user running the UT (or else the test would be worthless). I know how to add permissions for a certain user, but how can I deny the rest, including admins? (in case the user running the UT is an admin) (would this be a wise thing to do?)
    DirectoryInfo directoryInfo = new DirectoryInfo(path);
    DirectorySecurity directorySecurity = directoryInfo.GetAccessControl();
    directorySecurity.AddAccessRule(new FileSystemAccessRule("Domain\SecuredUser", FileSystemRights.FullControl, InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit, PropagationFlags.InheritOnly, AccessControlType.Allow));
    directorySecurity.RemoveAccessRule(new FileSystemAccessRule("??", FileSystemRights.FullControl, InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit, PropagationFlags.InheritOnly, AccessControlType.Deny));
    directoryInfo.SetAccessControl(directorySecurity);
    This isn't working. I don't know who I am supposed to deny: Domain\Admins, Domain\Administrators, me... No one is being denied, and when I check the folder's security the SecuredUser has access to the folder, but the permissions are not checked, even though I specified FullControl. Basically I want to code this:
    <authorization>
      <allow users="Domain\User" />
      <deny users="*" />
    </authorization>
    I was thinking about impersonating the UT run with a weak user with no permissions, but this would result in: Impersonate - Run UT - Impersonate - Access folder, and I'm not sure if this is the right design. Help would be greatly appreciated, thank you.

    Read the article

  • Help making userscript work in chrome

    - by Vishal Shah
    I've written a userscript for Gmail, Pimp.my.Gmail, and I'd like it to be compatible with Google Chrome too. Now I have tried a couple of things, to the best of my JavaScript knowledge (which is very weak), and have been successful up to a certain extent, though I'm not sure if it's the right way. Here's what I tried to make it work in Chrome: The very first thing I found is that contentWindow.document doesn't work in Chrome, so I tried contentDocument, which works. BUT I noticed one thing: checking the console messages in Firefox and Chrome, I saw that the script gets executed multiple times in Firefox whereas in Chrome it executes just once! So I had to abandon the window.addEventListener('load', init, false); line and replace it with window.setTimeout(init, 5000); and I'm not sure if this is a good idea. The other thing I tried is keeping the window.addEventListener('load', init, false); line and using window.setTimeout(init, 1000); inside init() in case the canvas frame is not found. So please do let me know what would be the best way to make this script cross-browser compatible. Oh, and I'm all ears for making this script better/more efficient code-wise (which I'm sure it can be).

    Read the article

  • What java web application framework to use?

    - by frohiky
    One of the main products of my company is an Oracle Forms (and Reports) based application that "needs" to be rewritten in another technology. Why? Users want a richer interface experience, and we want, preferably, to reduce costs with an open source application server. For this (HUGE) project, we intend to use a Java web application framework. Keep these points in mind:
    We have: hundreds of tables in our database (the ORM must be as flexible as possible); some logic which is (and will remain) based on PL/SQL procedures/functions/packages; a lot of CRUDs (the application itself is of a considerable size); a demand to work with/generate documents and workflows; an intranet-based user environment.
    We want: to offer a RIA interface experience; to use (if possible) an open source app server; a rapid (as possible) development framework; a somewhat mature framework with a "wise" roadmap (and considerable community support); an MVC approach combined with JS or GWT widgets (e.g. Vaadin or SmartGWT).
    Well, in the past weeks I've read a lot of posts, Q&As on Stack Overflow, and much more: Wicket, JSF, Tapestry, Grails, GWT, Struts2, Play, Spring, Seam, Echo, .... the list goes on! I've even researched Alfresco! The obvious question: Which one to use? At this time, any insight, recommendation, shared experience or advice will be more than welcome!

    Read the article
