Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Using CSS max-height on an outer div to force scroll on an inner div

    - by Jay Neely
    I have an outer div with a variable height (and max-height) that's set with a specific pixel amount by JavaScript, containing two divs within. The 1st div is intended to hold a variable amount of content, e.g. a list of links. It has no height set. The 2nd div is intended to hold a fixed amount of content, and has a specific height set. Right now, the max-height isn't working. The 1st div keeps growing, even with overflow: auto; set, and pushes the 2nd div below it outside the bounds of the outer div. How can I make it so that when the 1st div gets too large for the outer div to contain both it and the fixed-height 2nd div, the 1st div will start to scroll? Example page: http://thevastdesign.com/scrollTest.html Thanks for any help. I'd appreciate a CSS solution the most, even if it requires some hacks. It only has to work in Firefox 3+, IE8, and IE7. Ideas?
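
    A sketch of one CSS-only arrangement that fits the stated browsers (FF3+, IE7/IE8, with IE7 needing a standards-mode doctype for the top+bottom pinning): absolutely position both inner divs against the outer one, pinning the list between the top and the fixed-height footer, so the list's height is always constrained and overflow: auto can kick in. The 300px/50px figures are invented; your JS would set the outer height:

        <div style="position: relative; height: 300px;">          <!-- height set by your JS -->
          <div style="position: absolute; top: 0; bottom: 50px; left: 0; right: 0;
                      overflow: auto;">
            <!-- variable-length list: scrolls once it outgrows the gap -->
          </div>
          <div style="position: absolute; bottom: 0; left: 0; right: 0; height: 50px;">
            <!-- fixed-height content -->
          </div>
        </div>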

    Read the article

  • Looking for a fast, compact, streamable, multi-language, strongly typed serialization format

    - by sanity
    I'm currently using JSON (compressed via gzip) in my Java project, in which I need to store a large number of objects (hundreds of millions) on disk. I have one JSON object per line, and disallow line breaks within the JSON object. This way I can stream the data off disk line by line without having to read the entire file at once. It turns out that parsing the JSON (using http://www.json.org/java/) is a bigger overhead than either pulling the raw data off disk or decompressing it (which I do on the fly). Ideally what I'd like is a strongly-typed serialization format, where I can specify "this object field is a list of strings" (for example); because the system knows what to expect, it can deserialize quickly, and I can also specify the format just by giving someone else its "type". It would also need to be cross-platform: I use Java, but work with people using PHP, Python, and other languages. So, to recap, it should be:
    - Strongly typed
    - Streamable (i.e. readable bit by bit without loading it all into RAM at once)
    - Cross-platform (including Java and PHP)
    - Fast
    - Free (as in speech)
    Any pointers?
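
    Protocol Buffers is one candidate that appears to tick these boxes: schema-typed, compact, free, with official Java and Python support and third-party PHP bindings. Framing each message with its length keeps the file streamable, much like the one-object-per-line scheme. A sketch (the message definition is invented):

        // records.proto (hypothetical schema), compiled with protoc
        message LogRecord {
            optional int64  id   = 1;
            repeated string tags = 2;
        }

        // Java: length-delimited framing keeps the file streamable and appendable
        LogRecord rec = LogRecord.newBuilder().setId(1).addTags("a").build();
        rec.writeDelimitedTo(out);                          // append one record to an OutputStream
        LogRecord next = LogRecord.parseDelimitedFrom(in);  // reads one record; null at end of stream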

    Read the article

  • Why does TeX/LaTeX not speed up on subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, calling LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file. (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could see the impact of that. Also, there might be situations where 100% correctness is not needed as long as it's fast.) Is there something in the TeX language which makes this complicated or impossible to accomplish, or is it just that the original implementation of TeX had no need for it (because it would have been slow anyway on the large computers of the time)? But then, on the other hand, why doesn't this annoy other people so much that they've started a fork with some sort of caching (or a transparent conversion of TeX files to a format which is faster to parse)? Is there anything I can do to speed up subsequent runs of LaTeX, apart from putting everything into chapterXX.tex files and then commenting them out?
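
    On that last point, a sketch of the standard partial-build mechanism: \include plus \includeonly reuses the .aux data of the skipped chapters (page and reference numbers stay correct), so no commenting out is needed:

        % preamble
        \includeonly{chapter03}  % only this file is retypeset; the others keep
                                 % their page/ref numbers from the .aux files
        \begin{document}
        \include{chapter01}
        \include{chapter02}
        \include{chapter03}
        \end{document}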

    Read the article

  • How to display a .gif as a background image?

    - by Vito De Tullio
    Hi! I have a JavaScript "loading" function like this:

        function splashScreen() {
            var div = document.createElement('div');
            div.appendChild(document.createTextNode("some text"));
            div.setAttribute("style", "position: fixed; " +
                "width: 100%; height: 100%; " +
                "left: 0; top: 0; " +
                "z-index: 1000; " +
                "background: #fff url('img/loading.gif') no-repeat center; " +
                "font-size: x-large; " +
                "text-align: center; " +
                "line-height: 3em; " +
                "opacity: 0.75; " +
                "filter: alpha(opacity=75); ");
            document.getElementsByTagName('body')[0].appendChild(div);
            return true;
        }

    I use this function in the form action (onsubmit="return splashScreen()") to show a "rotating logo" while the next page loads. The problem is with that "img/loading.gif" and Safari (on WinXP): in Firefox and IE I have no problems, and I clearly see the animated gif. In Safari I can't see it. If I replace the image with an (obviously static) PNG, the image appears. Am I doing something wrong? What's the problem with Safari?

    Read the article

  • Is XMLReader a SAX parser, a DOM parser, or neither?

    - by Renesis
    I am testing various methods to read (possibly large, and very frequently read) XML configuration files in PHP. No writing is ever needed. I have two successful implementations: one using SimpleXML (which I know is a DOM parser) and one using XMLReader. I know that a DOM parser must read the whole tree and therefore uses more memory; my tests reflect that. I also know that a SAX parser is an "event-based" parser that uses less memory because it reads each node from the stream without checking what comes next. XMLReader also reads from a stream, with a cursor providing data about the node it is currently at. So it definitely sounds like XMLReader (http://us2.php.net/xmlreader) is not a DOM parser, but my question is: is it a SAX parser, or something else? It seems like XMLReader behaves the way a SAX parser does but does not throw the events itself (in other words, could you construct a SAX parser on top of XMLReader?). If it is something else, does the classification it's in have a name?
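
    For what it's worth, the cursor style XMLReader exposes is usually called a pull parser (the model StAX uses in Java), as opposed to SAX's push model. A minimal sketch of the cursor in use (file and element names invented):

        $reader = new XMLReader();
        $reader->open('config.xml');      // streams; nothing is loaded up front
        while ($reader->read()) {         // you ask for the next node (pull)...
            if ($reader->nodeType == XMLReader::ELEMENT && $reader->name == 'setting') {
                // ...instead of being handed it in a callback (push/SAX)
            }
        }
        $reader->close();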

    Read the article

  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far. But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates! After some research we found that this is a fairly well-known issue, but the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least it did in our test case). However, we would much prefer to use /EHsc, as we translate SEH exceptions to C++ exceptions ourselves and we would rather allow the compiler more scope for optimisation. Are there any other workarounds for this issue, other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?
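
    For reference, the wrapper-per-call workaround mentioned above would look roughly like this; the native catch forces unwinding (and therefore the destructor calls) to complete within /EHsc-compiled code before anything crosses into the CLR. A sketch with hypothetical function names:

        // native side, compiled with /EHsc
        int CallIntoNative()
        {
            try {
                return DoNativeWork();  // destructors of its locals run during unwinding to here
            }
            catch (...) {
                throw;                  // rethrow only after native unwinding has completed
            }
        }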

    Read the article

  • What languages should a microISV use to write commercial software?

    - by Wal
    I've been writing software in Java for many years now, but it was always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now, but I don't know where to start. I've written a few Java/Swing applications, but again they were for internal use. My understanding is that Java and other semi-compiled and interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are compilers for Java and some other interpreted languages, but I've also heard that they are pricey and/or unreliable. Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written (close to) once and compiled for different operating systems, but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease & user experience) on Windows. My only issue there is that I don't have a large starting budget, and paying out the wazoo for the required development tools is not really in the cards.

    Read the article

  • Winsock WSAAsyncSelect sending without an infinite buffer

    - by Xexr
    Hi, this is more of a design question than a specific code question; I'm sure I'm missing the obvious and just need another set of eyes. I am writing a multi-client server based on WSAAsyncSelect; each connection becomes an object of a connection class I have written, which contains the associated settings, buffers, etc. My question concerns FD_WRITE. I understand how it operates: one FD_WRITE is sent immediately after a connection is established; thereafter, you should send until WSAEWOULDBLOCK is received, at which point you store what is left to send in a buffer and wait to be told that it is OK to send again. This is where I have a problem: how large do I make this holding buffer within each connection's object? The amount of time until a new FD_WRITE arrives is unknown, and I could be attempting to send a lot of data during this period, all the while adding to my outgoing buffer. If I make the buffer dynamic, memory usage could spiral out of control if, for whatever reason, I am unable to send() and shrink the buffer. So my question is: how do you generally handle this situation? Note I am not talking about the network buffer itself, which Winsock manages, but one of my own creation used to "queue up" sends. Hope I explained that well enough, thanks all!
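
    For what it's worth, one common pattern is a capped per-connection queue treated as back-pressure: try send() immediately, buffer the remainder on WSAEWOULDBLOCK, and if a slow client fills the cap, drop the data or disconnect rather than grow without bound. A rough sketch (cap value and names invented, error handling trimmed):

        #include <winsock2.h>
        #include <deque>

        static const size_t kMaxQueued = 256 * 1024;  // per-connection cap, tune to taste
        std::deque<char> pending;                     // bytes not yet accepted by send()

        bool QueueSend(SOCKET s, const char* data, int len)
        {
            if (pending.empty()) {                    // nothing queued: try to send right away
                int sent = send(s, data, len, 0);
                if (sent == SOCKET_ERROR) {
                    if (WSAGetLastError() != WSAEWOULDBLOCK)
                        return false;                 // real error: close the connection
                    sent = 0;
                }
                data += sent;
                len -= sent;
            }
            if (pending.size() + len > kMaxQueued)
                return false;                         // slow consumer: back-pressure kicks in
            pending.insert(pending.end(), data, data + len);  // drained on the next FD_WRITE
            return true;
        }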

    Read the article

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics and tries to guess your demographic stats based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like. Essentially, you have a large number of data points that each pull you towards their own demographics. However, just taking the average is poor, because it means that adding a lot of generic data drags the number down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want a value close to 100% male, not just the 49% that a straight average would give. Also, consider that for most demographics (i.e. everything other than gender) the average is not 50%. For example, the average probability of having kids 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status. What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap & easy to do in MySQL?
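
    One standard answer is to combine in log-odds space (naive-Bayes pooling): sites at the population mean contribute nothing, while outliers like the 100%-male site dominate, which is exactly the behaviour described. A sketch in MySQL, assuming a hypothetical table visits(user_id, p) of per-site probabilities for one demographic, with p clamped into [0.001, 0.999] so LN() never sees 0 or 1:

        SET @m = 0.5;  -- population mean for this demographic (0.5 for gender, ~0.37 for kids)

        SELECT 1 / (1 + EXP(-(
                   LN(@m / (1 - @m))                          -- start at the prior's log-odds
                 + SUM(LN(p / (1 - p)) - LN(@m / (1 - @m)))   -- each site shifts it by its deviation
               ))) AS combined_probability
        FROM visits
        WHERE user_id = 42;

    On the S0..S50 example this gives roughly 0.95 male rather than the straight average's 0.49, since fifty 48%-female sites each contribute only a tiny negative deviation while the one extreme site contributes a large positive one.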

    Read the article

  • Need a way to determine if a file is done being written to.

    - by Khorkrak
    The situation I'm in is this: there's a process that's writing to a file, and sometimes the file is rather large, say 400-500 MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see the file there, but it might not be finished being written. This also needs to be done remotely: the two machines are on the same internal LAN but are not the same computer, and typically the process that wants to know when the writing is done runs on a Linux box, while the process writing the file, and the file itself, are on a Windows box. No, Samba isn't an option. XML-RPC communication to a service on that Windows box is an option, as is using SNMP to check, if that's viable.
    Ideally:
    - Works on either Linux or Windows (i.e. the solution is OS-independent).
    - Works for any type of file.
    Good enough:
    - Works just on Windows, but can be done through some library that can be accessed from Python.
    - Works only for PDF files.
    My current best idea is to periodically open the file in question from some process on the Windows box and look at the last bytes, checking for the PDF end tag and accounting for EOL differences, because the file may have been created on Linux or Windows.
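
    A sketch of that "good enough" PDF variant, which could sit behind an XML-RPC service on the Windows box (the tail size is a made-up value):

        import os

        def pdf_looks_complete(path, tail_bytes=1024):
            """Heuristic: a finished PDF ends with %%EOF, possibly followed by EOLs."""
            size = os.path.getsize(path)
            if size == 0:
                return False
            with open(path, 'rb') as f:
                f.seek(max(0, size - tail_bytes))  # only read the tail; file may be 500 MB
                tail = f.read()
            # strip trailing CR/LF/whitespace to cover both Linux- and Windows-made files
            return tail.rstrip(b'\r\n\t ').endswith(b'%%EOF')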

    Read the article

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all, first some background: I have some research code which performs a Monte Carlo simulation. Essentially, I iterate through a collection of objects and compute a number of vectors from their surface; then, for each vector, I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudocode looks something like this:

        for each object {
            for a number of vectors {
                do some computations
                for each object {
                    check if vector intersects
                }
            }
        }

    As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which tests arrays, lists and vectors, and for my first test cases found that vector iterators were around twice as fast as arrays. However, when I used a vector in my real code, it was somewhat slower than the array I was using before. So I went back to the test code and increased the complexity of the function each loop was calling (a dummy function equivalent to 'check if vector intersects'), and I found that as the complexity of the function increases, the execution-time gap between arrays and vectors shrinks, until eventually the array was quicker. Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop's run time.

    Read the article

  • iPhone OS: Get a list of methods and variables from anonymous object

    - by ChrisOPeterson
    I am building my first iPhone/Objective-C app and I have a large number of data-holding subclasses that I am passing into a cite function. To the cite function these objects are anonymous, and I need to find a way to access all the variables of each passed object. I have been using a pre-built NSArray and selectors to do this, but with more than 30 entries (and growing) it is kind of silly to do manually. There has to be a way to dynamically look up all the variables of an anonymous object. The Objective-C runtime docs mention this problem, but from what I can tell the feature is not available in iPhone OS; if it is, then I don't understand the implementation and need some guidance. A similar question was asked before, but again I think they were talking about OS X and not iPhone. Any thoughts?

        -(NSString *)cite:(id)source {
            NSString *sourceClass = NSStringFromClass([source class]);
            // Runs through all the variables in the manually built methodList
            for (id method in methodList) {
                SEL x = NSSelectorFromString(method);
                // further implementation
            }
            // Should instead be something like:
            // NSArray *methodList = [[NSArray alloc] initWithObjects:[source getVariableList]];
            // for (id method in methodList) { SEL x = NSSelectorFromString(method); ... }
        }
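
    For what it's worth, the <objc/runtime.h> reflection functions appear to be available on iPhone OS as well; a sketch of enumerating an anonymous object's instance variables (the method name is invented):

        #import <objc/runtime.h>

        - (NSArray *)variableNamesOf:(id)source {
            unsigned int count = 0;
            Ivar *ivars = class_copyIvarList([source class], &count);  // caller must free()
            NSMutableArray *names = [NSMutableArray arrayWithCapacity:count];
            for (unsigned int i = 0; i < count; i++) {
                [names addObject:[NSString stringWithUTF8String:ivar_getName(ivars[i])]];
                // id value = object_getIvar(source, ivars[i]);  // works for object-typed ivars
            }
            free(ivars);
            return names;
        }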

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Rails Binary Stream support

    - by Craig Walker
    I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned about the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality. Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails? BTW, I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, so I should be able to switch from one to the other. If this is not the case (i.e. there's some problem with using Rails with a particular DB) then please do mention it. UPDATE: I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload, effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and description to reflect this.

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: certain small regions are of relatively high density, and there are relatively few isolated nonempty cells. The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). This needs to be easy to persist. Here are the operations I may want to perform (reasonably efficiently) on this grid:
    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction while searching)
    - Approximate convex hull for a region
    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process). What advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
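
    The usual relational answer to sparse-infinite is chunking: store fixed-size tiles keyed by their chunk coordinates, so empty space costs no rows and several of the listed operations become small range scans. A sketch (tile size and column names invented):

        -- 64x64-cell tiles; only non-empty tiles get a row
        CREATE TABLE chunk (
            cx      INT   NOT NULL,   -- chunk coords = floor(cell coord / 64)
            cy      INT   NOT NULL,
            cells   BLOB  NOT NULL,   -- packed payload for this tile
            density FLOAT NOT NULL,   -- precomputed, for spawn-point queries
            PRIMARY KEY (cx, cy)
        );

        -- a player's neighborhood: a small rectangle of tiles
        SELECT * FROM chunk
        WHERE cx BETWEEN 10 AND 12 AND cy BETWEEN -3 AND -1;

        -- "somewhere with roughly this density"
        SELECT cx, cy FROM chunk
        WHERE density BETWEEN 0.2 AND 0.3 LIMIT 10;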

    Read the article

  • Adapting combination code for a larger list

    - by vbNewbie
    I have the following code to generate combinations of strings for a small list, and would like to adapt it for a large list of over 300 words. Can anyone suggest how to alter this code, or a different method to use?

        Public Class combinations
            Public Shared Sub main()
                Dim myAnimals As String = "cat dog horse ape hen mouse"
                Dim myAnimalCombinations As String() = BuildCombinations(myAnimals)
                For Each combination As String In myAnimalCombinations
                    'Look on the Output tab for the results!
                    Console.WriteLine("(" & combination & ")")
                Next combination
                Console.ReadLine()
            End Sub

            Public Shared Function BuildCombinations(ByVal inputString As String) As String()
                'Separate the sentence into usable words.
                Dim wordsArray As String() = inputString.Split(" ".ToCharArray)
                'A place to store the results as we build them
                Dim returnArray() As String = New String() {""}
                'The 'combination level' that we're up to
                Dim wordDistance As Integer = 1
                'Go through all the combination levels...
                For wordDistance = 1 To wordsArray.GetUpperBound(0)
                    'Go through all the words at this combination level...
                    For wordIndex As Integer = 0 To wordsArray.GetUpperBound(0) - wordDistance
                        'Get the first word of this combination level
                        Dim combination As New System.Text.StringBuilder(wordsArray(wordIndex))
                        'Add all the remaining words at this combination level
                        For combinationIndex As Integer = 1 To wordDistance
                            combination.Append(" " & wordsArray(wordIndex + combinationIndex))
                        Next combinationIndex
                        'Add this combination to the results
                        returnArray(returnArray.GetUpperBound(0)) = combination.ToString
                        'Add a new row to the results, ready for the next combination
                        ReDim Preserve returnArray(returnArray.GetUpperBound(0) + 1)
                    Next wordIndex
                Next wordDistance
                'Get rid of the last, blank row.
                ReDim Preserve returnArray(returnArray.GetUpperBound(0) - 1)
                'Return combinations to the calling method.
                Return returnArray
            End Function
        End Class

    Read the article

  • Optimizing T-SQL code

    - by adopilot
    My job is to maintain one application which makes heavy use of SQL Server (MSSQL 2005). Until now the middle tier has stored T-SQL code in XML and sent dynamic T-SQL queries without using stored procs. Since I am able to change those XML queries, I want to migrate most of them to stored procs. The question is the following: most of my queries have the same WHERE conditions against one table. Sample:

        select ..... from ....
        where ....
          and (a.vrsta_id = @vrsta_id or @vrsta_id = 0)
          and (a.podvrsta_id = @podvrsta_id or @podvrsta_id = 0)
          and (a.podgrupa_2 = @podgrupa2_id or @podgrupa2_id = 0)
          and ((a.id in (select art_id
                         from osobina_veze
                         where podosobina_id in (select ado from dbo.fn_ado_param_int(@podosobina))
                         group by art_id
                         having count(art_id) = @podosobina_count))
               or ('0' = @podosobina))

    They also have the same WHERE conditions on another table. How should I organize my code? What is the proper way? Should I make a table-valued function that I use in all queries, or use #temp tables and simply inner join my queries against them each time the proc executes? Or use a #temp table filled by a table-valued function? Or leave all queries with this large WHERE clause and hope the index does its job? Or use a WITH statement (a CTE)?
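
    One option for sharing the common predicate is an inline table-valued function; the optimizer expands it like a parameterized view, so it avoids #temp materialization while keeping the condition in one place. A sketch over the columns shown (the table name is invented; the @podosobina clause could be folded in the same way):

        CREATE FUNCTION dbo.fn_FilterArtikli
            (@vrsta_id INT, @podvrsta_id INT, @podgrupa2_id INT)
        RETURNS TABLE
        AS RETURN
        (
            SELECT a.*
            FROM dbo.artikli a   -- hypothetical table name
            WHERE (a.vrsta_id    = @vrsta_id     OR @vrsta_id     = 0)
              AND (a.podvrsta_id = @podvrsta_id  OR @podvrsta_id  = 0)
              AND (a.podgrupa_2  = @podgrupa2_id OR @podgrupa2_id = 0)
        );

        -- inside each stored proc:
        SELECT a.id
        FROM dbo.fn_FilterArtikli(@vrsta_id, @podvrsta_id, @podgrupa2_id) a;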

    Read the article

  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document that is in either a pre- or post-FO-transformed state, from which I need to extract some information. In the pre case, I need to pull out two tags that represent the pageWidth and pageHeight, and in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head). What I'm looking for is an efficient, easily maintainable way to grab these two elements, reading the document only once while fetching both. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching, and it gets messy when the tags span multiple lines. I then looked at DOMParser, which seems like it would be ideal, but I don't want to read the entire file into memory if I can help it, as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish. Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.
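
    A middle ground worth naming is a pull parser (StAX, javax.xml.stream, bundled since Java 6): it streams like SAX but reads like a cursor, and can simply stop once both values are found. A sketch (file and tag names assumed from the description above):

        import javax.xml.stream.*;
        import java.io.FileInputStream;

        static String[] readPageSize(String file) throws Exception {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream(file));
            String width = null, height = null;
            while (r.hasNext() && (width == null || height == null)) {
                if (r.next() == XMLStreamConstants.START_ELEMENT) {
                    if ("pageWidth".equals(r.getLocalName()))  width  = r.getElementText();
                    if ("pageHeight".equals(r.getLocalName())) height = r.getElementText();
                }
            }
            r.close();  // typically stops near the top; the rest of the file is never read
            return new String[] { width, height };
        }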

    Read the article

  • Should I use MEF or Prism for my Silverlight project?

    - by Daniel
    Hi! My team (3 developers) will be building a Silverlight LOB application. This is our first Silverlight project; we've been doing mostly WinForms. We'll be using Silverlight 4 / VS2010 / possibly WCF RIA Services, and an ASP.NET web application to handle authentication and host the Silverlight pages. We need a way to:
    - Modularize the Silverlight project so we can work on different parts of the application, then integrate them.
    - Dynamically load different parts of the application, so the initial download size of the .xap file wouldn't be too large.
    After some research, I found that Prism and MEF are possible solutions for these goals. Can you give me advice on which framework to use, or possibly another solution? We don't have much experience with Silverlight and the project needs to be finished in 3 months, so the learning curves of the frameworks should be considered. Thank you for reading! Any input will be much appreciated.

    Read the article

  • Android application listed as compatible with Sony Xperia S but still filtered from Google Play

    - by mlidal
    I have published an Android application and some users are complaining that it is listed as not compatible with the Sony Xperia S. According to the developer console, the Xperia S (LT26i) is listed as compatible. Does anyone know of any reason why the app is still filtered from Google Play? I have seen people reporting problems with big APK files. This app is about 20 MB in size, with the largest file being 14 MB. Quite a bit, but not enough to cause problems, I think... Here is the output from aapt dump badging:

        package: name='no.bouvet.nrkut' versionCode='4' versionName='1.0'
        sdkVersion:'4'
        targetSdkVersion:'13'
        uses-permission:'android.permission.ACCESS_FINE_LOCATION'
        uses-permission:'android.permission.ACCESS_COARSE_LOCATION'
        uses-permission:'android.permission.ACCESS_WIFI_STATE'
        uses-permission:'android.permission.ACCESS_NETWORK_STATE'
        uses-permission:'android.permission.INTERNET'
        uses-permission:'android.permission.WRITE_EXTERNAL_STORAGE'
        application-label:'UT.no'
        application-icon-120:'res/drawable-ldpi/utno_launcher.png'
        application-icon-160:'res/drawable-mdpi/utno_launcher.png'
        application-icon-240:'res/drawable-hdpi/utno_launcher.png'
        application-icon-320:'res/drawable-xhdpi/utno_launcher.png'
        application: label='UT.no' icon='res/drawable-mdpi/utno_launcher.png'
        launchable-activity: name='no.bouvet.nrkut.MainActivity' label='UT.no' icon=''
        uses-feature:'android.hardware.location'
        uses-feature:'android.hardware.location.gps'
        uses-feature:'android.hardware.location.network'
        uses-feature:'android.hardware.wifi'
        uses-feature:'android.hardware.touchscreen'
        uses-feature:'android.hardware.screen.portrait'
        main
        other-activities
        search
        supports-screens: 'small' 'normal' 'large' 'xlarge'
        supports-any-density: 'true'
        locales: '--_--'
        densities: '120' '160' '240' '320'

    Read the article

  • How can I pare down Vim's buffer list to only include active buffers

    - by nelstrom
    How can I pare down my buffer list to only include buffers that are currently open in a window or tab? When I've been running Vim for a long time, the list of buffers revealed by the :ls command is too large to work with. Ideally, I would like to delete all of the buffers which are not currently visible in a tab or window by running a custom command such as :Only. Can anybody suggest how to achieve this? It looks like the :bdelete command can accept a list of buffer numbers, but I'm not sure how to translate the output from :ls into a format that can be consumed by :bdelete. Any help would be appreciated.
    Clarification: let's say that in my Vim session I have opened 4 files. The :ls command outputs:

        :ls
          1  a   "abc.c"
          2  h   "123.c"
          3  h   "xyz.c"
          4  a   "abc.h"

    Buffer 1 is in the current tab, and buffer 4 is in a separate tab, but buffers 2 and 3 are both hidden. I would like to run the command :Only, and it would wipe buffers 2 and 3, so the :ls command would output:

        :ls
          1  a   "abc.c"
          4  a   "abc.h"

    This example doesn't make the proposed :Only command look very useful, but if you have a list of 40 buffers it would be very welcome.
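
    A sketch of an :Only command along the lines requested: collect the buffers visible in any window of any tab, then :bdelete every listed buffer not in that set:

        " :Only - delete every listed buffer not visible in any window of any tab
        command! Only call s:DeleteHiddenBuffers()
        function! s:DeleteHiddenBuffers()
          let visible = {}
          for t in range(1, tabpagenr('$'))
            for b in tabpagebuflist(t)
              let visible[b] = 1
            endfor
          endfor
          for b in range(1, bufnr('$'))
            " use 'bdelete!' instead to also discard unsaved changes
            if buflisted(b) && !has_key(visible, b)
              execute 'bdelete' b
            endif
          endfor
        endfunction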

    Read the article

  • specifying multiple URLs with cURL/PHP using square brackets

    - by Raj Gundu
    I have a large array of URLs similar to this:

        $nodes = array(
            'http://www.example.com/product.php?page=1&sortOn=sellprice',
            'http://www.example.com/product.php?page=2&sortOn=sellprice',
            'http://www.example.com/product.php?page=3&sortOn=sellprice'
        );

    The cURL manual states (http://curl.haxx.se/docs/manpage.html) that I can use square brackets '[]' to specify multiple URLs. Applied to the example above, that would be:

        'http://www.example.com/product.php?page=[1-3]&sortOn=sellprice'

    So far I have been unable to reference this correctly. This is the complete code segment I'm currently trying to use it in ($nodes as above):

        $node_count = count($nodes);
        $curl_arr = array();
        $master = curl_multi_init();

        for ($i = 0; $i < $node_count; $i++) {
            $url = $nodes[$i];
            $curl_arr[$i] = curl_init($url);
            curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, true);
            curl_multi_add_handle($master, $curl_arr[$i]);
        }

        do {
            curl_multi_exec($master, $running);
        } while ($running > 0);

        echo "results: ";
        for ($i = 0; $i < $node_count; $i++) {
            $results = curl_multi_getcontent($curl_arr[$i]);
            echo($i . "\n" . $results . "\n");
        }
        echo 'done';

    I can't seem to find any more documentation on this. Thanks in advance.

    Read the article

  • Legal issues in Europe: check patents?

    - by Bugz R us
    We live in Europe and are releasing commercial software in multiple countries. Besides the licensing issues (GPL/LGPL/...), we have a question about patents. I know that if you're in the US, before you release software you have to check that there aren't any patents you're infringing upon. I also know most of these patents are usually irrational and form a heavy burden on developers/software engineers. Now, as far as I know, EU rules are a lot more rational, but there has been lobbying to apply the same rules in the EU as well:
    http://www.nosoftwarepatents.com
    http://www.stopsoftwarepatents.eu
    So what's the deal, actually? For example, there's mention of a patent on a shopping cart: http://v3.espacenet.com/publicationDetails/biblio?CC=EP&NR=807891&KC=&FT=E
    Is that true? Is a "shopping cart" patented??? From http://stackoverflow.com/questions/1396191/what-should-every-developer-know-about-legal-matters: "4. Software patent lawsuits are crap shoots. You should not, of course, knowingly violate a software patent. However, there is a small but real chance some company will sue you for violating their patent. This may happen even if you develop your software independently, you never heard of the patent, and the patent covers a technique that is intuitively obvious and almost completely unrelated to your software. There is not a lot you can do to avoid this, given the current USPTO policies, other than buy insurance. The good news is that patent trolls generally sue large companies with lots of money."

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and in external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk: 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?' 'What if one of our systems changes and the connector breaks?' 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?' With one developer behind these little projects, the answers are invariably: 'Nobody, but...' 'It will break, just like any other application would...' 'I, uh...' How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of using that 6 months to consult about what tender process we might use?

    Read the article

  • [LaTeX] breaking an environment across pages - the smart way

    - by Flavius
    Hi, I am using the exercise package to display exercises in a book. I have redefined some commands like this, which basically adds some space, a pencil, and two \hrule's before and after the exercise:

        \renewcommand{\ExerciseHeader}{\vskip 1em\hrule\vskip 1em
          \centerline{\textbf{\large\smallpencil
            \ExerciseHeaderNB\ExerciseHeaderTitle%
            \ExerciseHeaderDifficulty\ExerciseHeaderOrigin\medskip}}}

        \makeatletter
        \def\endExerciseEnv{\termineliste{1}\@EndExeBox\vskip .5em\hrule\vskip 1em}
        \makeatother

    Now this works, but there's a small problem: there are situations where only the \hrule ends up at the bottom of a page, and the rest of the exercise goes on the next page. There is also the opposite behaviour: the entire exercise is on one page, except the \hrule in \endExerciseEnv, which is flushed to the next page. My question is: how do I force the top \hrule to come either together with the header of the exercise (caption, title, and so on) and at least the first two paragraphs (or \ExePath and a paragraph, or anything like that, but there must be at least "two things", so it doesn't look ugly), or be flushed altogether, with the entire exercise? Similar question for the bottom \hrule: how do I force it to have at least two items in front of it on the page it ends up on? Any LaTeX guru who knows this?
    Addendum: I have asked LaTeX questions like this in the past and received answers which required me to do things manually, like "insert a \vskip here and there". Let me be clear: this is a book, there are lots of exercises, and I need this done automatically, by properly redeclaring commands & co.
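
    One package-based way to express "these things must stay together" is needspace: requesting a minimum amount of free vertical space just before the header (and adding a \nopagebreak before the closing rule) pushes the whole group to the next page when it wouldn't fit. A sketch (the 4\baselineskip figure is a guess meant to cover the rule, the header, and about two lines):

        \usepackage{needspace}

        \renewcommand{\ExerciseHeader}{%
          \Needspace*{4\baselineskip}% break here first if the group can't fit
          \vskip 1em\hrule\vskip 1em
          \centerline{\textbf{\large\smallpencil
            \ExerciseHeaderNB\ExerciseHeaderTitle%
            \ExerciseHeaderDifficulty\ExerciseHeaderOrigin\medskip}}}

        \makeatletter
        \def\endExerciseEnv{\termineliste{1}\@EndExeBox
          \nopagebreak[4]\vskip .5em\hrule\vskip 1em}% keep the rule glued to the last line
        \makeatother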

    Read the article
