Search Results

Search found 1341 results on 54 pages for 'factor mystic'.


  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies: one, a very well known, public-facing, high-traffic website, and another, a high-end Financial Services web-based hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions... OLD LIKE YOU WOULDN'T BELIEVE... which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right??

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me that I-don't-know-ish look, and I'd never gain back their respect and my credibility (well, that'll be hard to blow... lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, does not agree with any benefits from having source control integrated, and yet uses the infamous Visual SourceSafe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices).

    So, why are such smarties not running away from Visual SourceSafe? Surely if it were so bad, that would have been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please, no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual SourceSafe, it might be time to migrate over to TFS. I was able to convince them, and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything... and now we're on TFS. Much more integrated. Much more cooler. So all in all, VSS served its purpose for years without a hiccup. There were no horror stories, and Visual SourceSafe as source control worked just fine. So, to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.


  • Data Formatters temporarily unavailable

    - by iphone newbie
    I'm currently working on an iPhone app. This app has a login screen and a signup screen. After the user has successfully signed up, I dismiss the signup view, then the app automatically logs in using the created account. After which, the login view is dismissed, showing the main view.

    I'm trying to modify this by immediately dismissing the login view, since I already have the account details of the user when the signup is successful. Basically, the ideal flow is: after the user successfully signs up, I save the username and password in a singleton class, then dismiss the signup view. When I get to the parent view (which is the login screen), I have a variable that checks if there was a successful signup. If that variable is true, I want to immediately dismiss the login view. However, I come across this error message:

        Data Formatters temporarily unavailable, will re-try after a 'continue'.
        (Unknown error loading shared library "/Developer/usr/lib/libXcodeDebuggerSupport.dylib")

    I'm not really sure why this happens. I have no problem dismissing the login view when I go through the actual login procedure, which of course also dismisses the login view if the user inputs a correct username and password. I'm not exactly sure, but I'm starting to think that the iPhone cannot handle dismissing two view controllers almost at the same time. Is it possible that I'm dismissing the login view too quickly? Is that a factor? Is there any way for me to dismiss two view controllers almost simultaneously without coming across this error message?


  • Silverlight and Encryption, how to store/generate the key/IV pair?

    - by cmaduro
    I have a Silverlight app that connects to a PHP web service. I want to encrypt the communication between the web service and the Silverlight client, and I'm not relying on SSL. I'm encrypting/decrypting the POST string myself using AES with a 256-bit key and IV. The big questions then are:

    1. How do I generate a random, unique key/IV pair in PHP?
    2. How do I share this key/IV pair between the web service and the Silverlight client in a secure way?

    It seems impossible without having some kind of hard-coded key or IV on the client, which would compromise security. This is a public website; there are no logins, just the requirement of secure communication. I can hard-code the seed for the key/IV (which is hashed with SHA256 with a timestamp salt and then assigned as the key or IV) in the PHP source code; that's on the server, so that is pretty safe. However, on the client, the seed for the key/IV pair would be visible if it were hard-coded. Furthermore, using a timestamp as the basis for uniqueness/randomness is definitely not OK, since timestamps are predictable; it does, however, provide a common factor between the C# code and the PHP code.

    The only other option I can think of would be to have a third service involved that provides the key/IV to the Silverlight client as well as the PHP web service. This of course starts the cycle anew, with the question of how to store the credentials for accessing the key/IV distribution service on the Silverlight client.

    Sounds like the solution is then asymmetric encryption, since sensitive data will be viewed only on the administrative back end of the website. Unfortunately Silverlight has no asymmetric encryption classes. The solution? Roll my own Diffie-Hellman key exchange! Plug that key into AES256!
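
    The Diffie-Hellman arithmetic itself is tiny; here is a toy sketch of the exchange (Python used only to show the math, with deliberately small parameters). The usual caveats apply: a real exchange needs a large standardized prime (e.g. an RFC 3526 group), and unauthenticated DH is still open to a man-in-the-middle, so this illustrates the idea rather than a secure design:

        import hashlib
        import secrets

        # Public, non-secret parameters. 23 and 5 are toy values; a real
        # deployment would use a large safe prime and matching generator.
        p = 23
        g = 5

        # Each side picks a private exponent and publishes g^x mod p.
        a = secrets.randbelow(p - 2) + 1   # client's secret
        b = secrets.randbelow(p - 2) + 1   # server's secret
        A = pow(g, a, p)                   # sent client -> server in the clear
        B = pow(g, b, p)                   # sent server -> client in the clear

        # Both ends compute the same shared secret without transmitting it.
        shared = pow(B, a, p)
        assert shared == pow(A, b, p)

        # Hash the shared secret down to a 256-bit AES key.
        aes_key = hashlib.sha256(str(shared).encode()).digest()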


  • Good examples of MapServer / OpenLayers

    - by MarkJ
    I want to convince some clients to use MapServer and OpenLayers. Please can anyone suggest attractive websites to show off the possibilities! The clients will be impressed by:

    • A density map (otherwise known as a heat map, colour-shaded grid coverage, contour plot...).
    • The ability for the user to download the underlying data for the density map, restricted to the area being viewed, in some format such as netCDF.
    • Standard OpenLayers stuff: zooming, panning, scale bar, overview map...
    • Different base layers. Could be WMS, Google, Bing...
    • Searching for a placename, with the map panned to display the place.

    MapServer.org seems to be down right now :( But from memory their examples didn't have the "wow" factor. The OpenLayers examples demonstrate only one or two features per example - I want something to wow the clients by showing all the capabilities in one example.

    PS If you have good examples that use some other open source tools, post them by all means. But just JavaScript please: customer says no rich client.


  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up. It follows the basic standard layout for SVN repositories, but is more nested:

        |-trunk
        |-branches
        | |-releases
        | | |-releaseA
        | | `-releaseB
        | `-features
        |   |-featureX
        |   `-featureY
        `-tags
          |-releaseA
          | |-beta
          | `-RTP
          `-releaseB
            |-beta
            `-RTP

    (The feature branches are obviously temporary branches, but we have to take them into consideration, as it won't be feasible to close all of them at once in the near future.)

    For several reasons, but primarily because merges have been becoming an increasing pain, we are considering switching to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g. yasvn2hg, hg convert and svn2hg), with yasvn2hg being the most promising, but none of them seem to be able to deal with nested hierarchies; they all assume that branches and tags are organized in one flat directory each. The choice between named branches or clones as the conversion target of old SVN branches is not a limiting factor in this case, as either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes, but haven't decided on one yet. I'd obviously be interested in recommendations or experiences with similar setups concerning that issue as well.

    So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure that is the right approach in the first place, since the tools would have to handle historic merges and be aware of all the other branches.
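
    For what it's worth, if no converter copes with the nesting directly, one fallback is to drive hg convert over each branch path yourself and accept separate repositories. The loop below is only a sketch (the repository URL and names are assumptions), and converting path-by-path this way loses merge history that crosses branches, which is exactly the caveat raised above:

        import subprocess

        SVN_ROOT = "file:///var/svn/app"  # hypothetical local mirror of the repository

        BRANCHES = {
            "trunk": "trunk",
            "branches/releases/releaseA": "releaseA",
            "branches/releases/releaseB": "releaseB",
            "branches/features/featureX": "featureX",
            "branches/features/featureY": "featureY",
        }

        for svn_path, repo_name in BRANCHES.items():
            # Each SVN path becomes its own Mercurial repository.
            subprocess.check_call(
                ["hg", "convert", SVN_ROOT + "/" + svn_path, "converted/" + repo_name]
            )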


  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU-bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay in the amount of data being returned in UpdatePanel responses.

    Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME, the response (from making ListBox selections) alternates between 30K and 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU-bound while it is presumably processing this data.

    So, irrespective of whether or not we should be using an UpdatePanel, we'd like to figure out why every other async postback response is larger by a factor of more than 10 than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior should be happening with an UpdatePanel?


  • Why use short-circuit code?

    - by Tim Lytle
    Related questions: Benefits of using short-circuit evaluation; Why would a language NOT use short-circuit evaluation?; Can someone explain this line of code please? (Logic & Assignment operators)

    There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons?

    I'm not asking about situations where two entities need to be evaluated anyway, for example:

        if ($user->auth() AND $model->valid()) {
            $model->save();
        }

    To me the reasoning there is clear: since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose:

        if (is_string($userid) AND strlen($userid) > 10) {
            // do something
        }

    Because it wouldn't be wise to call strlen() with a non-string value.

    What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:

        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    This could have been:

        if (!defined('APPLICATION_PATH')) {
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
        }

    Or even as a single statement:

        if (!defined('APPLICATION_PATH'))
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?


  • Fitting Gaussian KDE in numpy/scipy in Python

    - by user248237
    I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param), where gaussian_kde_covfact is defined as:

        class gaussian_kde_covfact(stats.gaussian_kde):
            def __init__(self, dataset, covfact='scotts'):
                self.covfact = covfact
                scipy.stats.gaussian_kde.__init__(self, dataset)

            def _compute_covariance_(self):
                '''not used'''
                self.inv_cov = np.linalg.inv(self.covariance)
                self._norm_factor = sqrt(np.linalg.det(2*np.pi*self.covariance)) * self.n

            def covariance_factor(self):
                if self.covfact in ['sc', 'scotts']:
                    return self.scotts_factor()
                if self.covfact in ['si', 'silverman']:
                    return self.silverman_factor()
                elif self.covfact:
                    return float(self.covfact)
                else:
                    raise ValueError, \
                        'covariance factor has to be scotts, silverman or a number'

            def reset_covfact(self, covfact):
                self.covfact = covfact
                self.covariance_factor()
                self._compute_covariance()

    This works, but there is an edge case where diff is a vector of all 0s. In that case, I get the error:

        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance
            self.inv_cov = linalg.inv(self.covariance)
        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv
            if info>0: raise LinAlgError, "singular matrix"
        numpy.linalg.linalg.LinAlgError: singular matrix

    What's a way to get around this? In this case, I'd like it to return a density that's essentially peaked completely at a difference of 0, with no mass everywhere else. Thanks.
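
    One pragmatic workaround (a sketch only, and it assumes that approximating the spike is acceptable): detect the zero-variance case before fitting and jitter the data by a negligible amount, so the covariance matrix is no longer singular and the fitted density is sharply peaked at the common value:

        import numpy as np
        from scipy import stats

        def safe_kde(diff, jitter=1e-9):
            diff = np.asarray(diff, dtype=float)
            if np.ptp(diff) == 0.0:
                # All values identical: the covariance is singular and
                # gaussian_kde cannot be fit. Adding tiny noise yields a
                # density that is effectively a spike at diff[0].
                diff = diff + np.random.normal(scale=jitter, size=diff.shape)
            return stats.gaussian_kde(diff)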


  • How does Lucene index documents?

    - by Mehdi Amrollahi
    Hello, I read some documentation about Lucene; I also read the document at this link (http://lucene.sourceforge.net/talks/pisa). I don't really understand how Lucene indexes documents, and I don't understand which algorithms Lucene uses for indexing. At the above link, it says Lucene uses this algorithm for indexing:

        incremental algorithm:
          maintain a stack of segment indices
          create index for each incoming document
          push new indexes onto the stack
          let b=10 be the merge factor; M=8

          for (size = 1; size < M; size *= b) {
              if (there are b indexes with size docs on top of the stack) {
                  pop them off the stack;
                  merge them into a single index;
                  push the merged index onto the stack;
              } else {
                  break;
              }
          }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree algorithm, or any other algorithm like that, for indexing - or does it have a particular algorithm of its own? Thank you for reading my post.
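
    To see why the stacked-merge policy amortizes well, here is a small simulation of the idea described above (a sketch in Python, not Lucene's actual code): every document arrives as a one-document segment, and whenever b equal-sized segments sit on top of the stack, they are merged into one:

        def add_document(stack, b=10):
            stack.append(1)  # each new document starts as a 1-doc segment
            # Merge whenever the top b segments are all the same size.
            while len(stack) >= b and len(set(stack[-b:])) == 1:
                size = stack.pop()
                for _ in range(b - 1):
                    stack.pop()
                stack.append(size * b)  # the merged segment

        stack = []
        for _ in range(12345):
            add_document(stack)
        print(stack)  # 15 segments: one 10000, two 1000s, three 100s, four 10s, five 1s

    The stack never holds more than roughly b * log_b(N) segments, so each document costs amortized O(log N) merge work. Lucene's real merge policy is more elaborate, but this is the shape of what the Pisa slides describe.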


  • Pixel shader weird compilation error

    - by ytrewq
    Hi, I'm experimenting with shaders a bit and I keep getting this weird compilation error that's driving me crazy! The following pixel shader code snippet:

        DirectionVector = normalize(f3LightPosition[i] - PixelPos);
        LightVec = PixelNormal - DirectionVector;
        // Get the light strength factor
        LightStrFactor = float(abs((LightVec.x + LightVec.y + LightVec.z) / 3.0f));
        // TEST!!!
        LightStrFactor = 1.0f;
        // Add this light to the total light on this pixel
        LightVal += f4Light[i] * LightStrFactor;

    works perfectly, but as soon as I remove the "LightStrFactor = 1.0f;" line, i.e. letting LightStrFactor's value be the result of the calculation above, it fails to compile the shader. LightStrFactor is a float, LightVal and f4Light[i] are float4, and all the rest are float3.

    My question, besides why it doesn't compile, is: how come the DX compiler cares about the value of a float? Even if my values are incorrect, shouldn't that be a run-time matter? The shader compilation code is this:

        /* Compile the bitch */
        if (FAILED(D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_0", 0,
                                             &this->m_pCode, NULL, &this->m_constantTable)))
            GraphicException("Failed to compile pixel shader!");  // <-- gets here :(

        if (FAILED(g_D3dDevice->CreatePixelShader((DWORD*)this->m_pCode->GetBufferPointer(),
                                                  &this->m_hPixelShader)))
            GraphicException("Failed to create pixel shader!");

        this->m_fLoaded = true;

    Any help is appreciated. Thanks!!! :]


  • GXT Performance Issues

    - by pearl
    Hi all, we are working on a rather complex system using GXT. While everything works great on FF, IE (especially IE6) is a different story (looking at more than 10 seconds until the browser renders the page). I understand that one of the main reasons is DOM manipulation, which is a disaster under IE6 (see http://www.quirksmode.org/dom/innerhtml.html). This might be thought a generic problem of a front-end JavaScript framework (i.e. GWT), but a simple piece of code (see below) that executes the same functionality proves otherwise. In fact, under IE6, getSomeGWT() takes 400ms while getSomeGXT() takes 4 seconds. That's a x10 factor, which makes a huge difference for the user experience!!!

        private HorizontalPanel getSomeGWT() {
            HorizontalPanel pointsLogoPanel = new HorizontalPanel();
            for (int i = 0; i < 350; i++) {
                HorizontalPanel innerContainer = new HorizontalPanel();
                innerContainer.add(new Label("some GWT text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

        private LayoutContainer getSomeGXT() {
            LayoutContainer pointsLogoPanel = new LayoutContainer();
            pointsLogoPanel.setLayoutOnChange(true);
            for (int i = 0; i < 350; i++) {
                LayoutContainer innerContainer = new LayoutContainer();
                innerContainer.add(new Text("just some text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

    So to solve/mitigate the issue, one would need to either (a) reduce the number of DOM manipulations, or (b) replace them with innerHTML. AFAIK, (a) is simply a side effect of using GXT, and (b) is only possible with UiBinder, which isn't supported yet by GXT. Any ideas? Thanks in advance!


  • Default values for Content Taxonomy fields in Drupal with Hierarchical Select widget

    - by lazysoundsystem
    I'm trying to set the default value for a Content Taxonomy field in a hook_form_alter, but can't pin down the necessary format. I've tried this and many variations:

        foreach (element_children($form) as $child) {
            // Set $default_value.
            if ($form[$child]['tids']) {
                // This, for Content Taxonomy fields, isn't working:
                $form[$child]['tids']['#default_value'] = array('value' => $default_value);
                dsm($form[$child]['tids']['#default_value']);
            }
            else {
                // This, for other fields, is working:
                $form[$child][0]['#default_value']['value'] = $default_value;
            }
        }

    Can anyone tell me what I'm missing?

    Edit: In response to Henrik Opel (thanks for getting involved), here is the printout of the relevant field of the form with my changes to the default fields commented out, showing the '#default_value' field I'm trying to influence. It also shows that the option widget I'm using is Hierarchical Select (could this be a factor?). In the dsm() in the code above, the changes to the default value are recognised, but they don't get processed later on.

        field_name_of_content_taxonomy_field (Array, 3 elements)
            #tree (Boolean) TRUE
            #weight (String, 1 characters) 5
            tids (Array, 7 elements)
                #title (String, 10 characters) Vocabulary_name
                #type (String, 19 characters) hierarchical_select
                #weight (String, 1 characters) 5
                #config (Array, 15 elements)  // 15 elements here
                #required (String, 1 characters) 0
                #description (String, 0 characters)
                #default_value (Array, 0 elements)


  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance...

        -- [Data] has a predictable structure and a simple clustered index on the primary key:
        ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

        -- My query joins the table to itself, looking for a certain kind of "overlapping" records
        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON  [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
            AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
            AND [Data].[A] = @A
            AND @CS <= [Data].[C] AND [Data].[C] < @CE  -- between a range

    [Data] has about a quarter-million records so far; 10% to 50% of the data satisfies the WHERE clause, depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only on the ID, which isn't part of the conditions of the query, only the output. ??

    If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow??

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
            ([A], [B], [C], [D], [E], [F])
            INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning Advisor suggests a similar index, with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the WHERE clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index I can expect the performance to get much worse with additional data. What gives?


  • 'memcache-client' problem - app can't load the gem

    - by Max Williams
    Hi all - I'm trying to get memcached and the Interlock plugin working with a new Rails app. The weird thing is that they both work fine in another app on the same machine, and I can't figure out the difference that's stopping this app. The new app is Rails 2.3.4 and the old one is 2.2.2, in case that's a factor. When the app starts, I get a warning from Interlock:

        `install_memcached': Interlock::ConfigurationError: 'memcache-client' client requested but not installed. Try 'sudo gem install memcache-client'.

    Now, I have memcache-client installed:

        $> gem list | grep memcache
        memcache-client (1.7.8)

    The gem is in /var/lib/gems/1.8, which is in my GEM_PATH variable. On a bit of further investigation, the above error is raised by Interlock when it refers to the MemCache class, which doesn't exist and so raises an 'anonymous module' error. So, ultimately, the problem is that MemCache isn't loaded. I do have a memcached.yml in my config folder (below), however. I'm stuck - any advice, anyone?

        # contents of config/memcached.yml
        defaults:
          namespace: millionaire
          #sessions: true
          sessions: false
          client: memcache-client
          with_finders: true

        development:
          servers:
            - 127.0.0.1:11211

        production:
          servers:
            - 127.0.0.1:11211


  • Making Firefox render canvas text the same as CSS text

    - by Dan Forys
    I've been experimenting with the canvas tag and JavaScript. I've made a page that takes tweets from the Twitter public timeline and animates them into view. It works by using a canvas element in the background for the animation. When the animation is complete, it creates a div element with the same text over the top. I do this so that the tweet text is selectable and links are clickable.

    Now, in Safari, Chrome and even Opera, the canvas text and div text look almost exactly the same. Yet in Firefox, the size of the text is different enough to make it 'jump' at the point it changes into the div. Does anyone know how to make Firefox render the text the same on the canvas element and the div using CSS? Or is this a rendering inconsistency in the engine? I have put the page on my website if you want to see what I mean.

    Now for the code. The CSS I'm using for rendering the div contains:

        line-height: 21px;
        font-weight: 100;
        font-family: Georgia, "New Century Schoolbook", "Nimbus Roman No9 L", serif;
        font-size: 20px;

    For rendering on the canvas I'm using:

        this.context.font = this.scale + 'px Georgia';
        this.context.fillStyle = "white";
        this.context.strokeStyle = 'white';
        this.context.fillText(this.text, 0, 0);
        this.context.strokeText(this.text, 0, 0);

    where this.scale is an animated scale factor that finishes at exactly 20px. So, to recap, I'm using the same font and ending up at the same px size, yet Firefox renders the text differently between canvas and CSS.

    (edit) Here's a screenshot example: the first line is the text animating in using canvas, the second line is the resulting div.


  • php cli script hangs with no messages

    - by julio
    Hi-- I've written a PHP script that runs via SSH and nohup, meant to process records from a database and do stuff with them (e.g. process some images, update some rows). It works fine with small loads, up to maybe 10k records. I have some larger datasets that process around 40k records (not a lot, I realize, but it adds up to a lot of work when each record requires the download and processing of up to 50 images). The larger datasets can take days to process.

    Sometimes I'll see memory errors in my debug logs, which are clear enough - but sometimes the script just appears to "die" or go zombie on me. The tail of my debug log just stops, with no error messages; the tail of the nohup log ends with no error; and the process still shows in a ps list, looking like this:

        26075 pts/0    S    745:01 /usr/bin/php ./import.php

    but no work is getting done. Can anyone give me some ideas on why a process would just quit? The obvious things (like a PHP script timeout and memory issues) are not a factor, as far as I can tell. Thanks for any tips.

    PS-- this is hosted on a GoDaddy VDS (not my choice). I am sort of suspecting that GoDaddy has some kind of limits that might kick in on me despite what overrides I put in the code (such as set_time_limit(0);).


  • video streaming over http in blackberry

    - by ysnky
    Hi all, while I was searching for a video player over HTTP, I found the article located at this URL:

        http://www.blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/Streaming_media_-_Start_to_finish.html?nodeid=2456737&vernum=0

    I can run it by adding ";deviceside=true" at the end of the URL. It works fine in the JDE 4.5 simulator, where it gets 3gp videos from my local server. I tested it with 580KB files and it works fine. But when I get the same file from my real server (not local), I have problems with big files (e.g. 580KB). It plays 180KB files (though sometimes it does not play those either) but never the 580KB file. I also deployed my application to my 9000 device; it sometimes plays the small file (180KB) but never plays the big one (580KB). Why does it play when the file is on my local server, but not in the real world? I've been stuck for days. I hope you can help me.

    The code at the URL given below also does not work; the only code I've found that works is the one above:

        blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/How_To_-_Play_video_within_a_BlackBerry_smartphone_application.html?nodeid=1383173&vernum=0

    By the way, there is no resize(long param) method on the CircularByteBuffer class, so I commented out the relevant line (buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));) as shown below:

        public void increaseBufferCapacity(int percent) {
            if (percent < 0) {
                log(0, "FAILED! SP.setBufferCapacity() - " + percent);
                throw new IllegalArgumentException("Increase factor must be positive..");
            }
            synchronized (readLock) {
                synchronized (connectionLock) {
                    synchronized (userSeekLock) {
                        synchronized (mediaIStream) {
                            log(0, "SP.setBufferCapacity() - " + percent);
                            //buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));
                            this.bufferCapacity = buffer.getSize();
                        }
                    }
                }
            }
        }

    Thanks in advance.


  • What programming languages do the top tier Universities teach?

    - by Simucal
    I'm constantly being inundated with articles and people talking about how most of today's universities are nothing more than Java vocational schools, churning out mediocre programmer after mediocre programmer. Our very own Joel Spolsky has his famous article, "The Perils of Java Schools." Similarly, Alan Kay, a famous computer scientist (and SO member), has said this in the past:

        "I fear — as far as I can tell — that most undergraduate degrees in computer science these days are basically Java vocational training." - Alan Kay (link)

    If the languages being taught by the schools are considered such a contributing factor to the quality of the school's program, then I'm curious which languages the "top-tier" computer science schools teach (MIT, Carnegie Mellon, Stanford, etc.). If the average school is performing so poorly due in large part to the languages (or lack thereof) that they teach, then which languages do the supposedly "good" CS programs teach that differentiate them?

    If you can, provide the name of the school you attended, followed by a list of the languages they use throughout their coursework.

    Edit: Shog9 asks why I don't get this information directly from the schools' websites themselves. I would, but many schools' websites don't discuss the languages they use in their class descriptions. Quite a few will say, "using high-level languages we will...", without elaborating on which languages they use. So we should be able to get a pretty accurate list of languages taught at various well-known institutions from the various SO members who have attended them.


  • Django - Override admin site's login form

    - by TrojanCentaur
    I'm currently trying to override the default form used in Django 1.4 when logging in to the admin site (my site uses an additional 'token' field, required for users who opt in to two-factor authentication and mandatory for site staff). Django's default form does not support what I need.

    Currently, I've got a file in my templates/ directory called templates/admin/login.html, which seems to be correctly overriding the template used with the one I use throughout the rest of my site. The contents of the file are simply as below:

        # admin/login.html:
        {% extends "login.html" %}

    The actual login form is as below:

        # login.html:
        {% load url from future %}<!DOCTYPE html>
        <html>
        <head>
            <title>Please log in</title>
        </head>
        <body>
            <div id="loginform">
                <form method="post" action="{% url 'id.views.auth' %}">
                    {% csrf_token %}
                    <input type="hidden" name="next" value="{{ next }}" />
                    {{ form.username.label_tag }}<br/>
                    {{ form.username }}<br/>
                    {{ form.password.label_tag }}<br/>
                    {{ form.password }}<br/>
                    {{ form.token.label_tag }}<br/>
                    {{ form.token }}<br/>
                    <input type="submit" value="Log In" />
                </form>
            </div>
        </body>
        </html>

    My issue is that the form works perfectly fine when accessed through my normal login URLs, because I supply my own AuthenticationForm as the form to display; but through the Django admin login route, Django supplies its own form to this template, and thus only the username and password fields render. Is there any way I can make this work, or am I just better off hard-coding the HTML fields into the form?
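
    For what it's worth, one possible approach (a sketch, assuming the AdminSite.login_form hook that Django's admin exposes, and with a placeholder verify_token() helper standing in for the real token check) is to register a custom authentication form with the admin site, so the {{ form.token }} field above both renders and validates:

        # A sketch, not a drop-in solution: verify_token() is a stub for
        # whatever actually validates the two-factor token.
        from django import forms
        from django.contrib import admin
        from django.contrib.admin.forms import AdminAuthenticationForm

        def verify_token(user, token):
            # Placeholder: wire this to the real two-factor check.
            return token is not None and len(token) == 6

        class TokenAdminAuthenticationForm(AdminAuthenticationForm):
            token = forms.CharField(label="Token", required=True)

            def clean(self):
                cleaned = super(TokenAdminAuthenticationForm, self).clean()
                if not verify_token(self.get_user(), cleaned.get("token")):
                    raise forms.ValidationError("Invalid two-factor token.")
                return cleaned

        # Tell the admin site to use the custom form on its login view.
        admin.site.login_form = TokenAdminAuthenticationForm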


  • Linux Lightweight Distro and X Windows for Development

    - by Fernando Barrocal
    Heyall... I want to build a lightweight Linux configuration to use for development. The first idea is to use it inside a virtual machine under Windows, or on old laptops with 1GB RAM tops. Maybe even a distributable environment for developers. So the whole idea is to use a LAMP server, a Java application server (Tomcat or Jetty), and X Windows (any window manager, from FVWM to Enlightenment), Eclipse, maybe jEdit, and of course Firefox.

    Edit: I am changing this post to compile a possible list of distros and window managers that can be used to configure a real lightweight development environment. I am using personal experiences on this matter as the basis. Info about the distros can be easily found on their sites, so please focus on personal use of those systems.

    Distros:

    • Ubuntu / Xubuntu
      Pros: Personal experience in old systems or low-RAM environments - @Schroeder, @SCdF; several suggestions based on personal knowledge - @Kyle, @Peter Hoffmann

    • Gentoo
      Pros: Not targeted at desktop users - @paan; doesn't come with a huge amount of applications - @paan

    • Slackware
      Pros: Suggested as having the best performance with a wise install/configuration - @Ryan

    • Damn Small Linux
      Pros: Main focus is the lightweight factor - 50MB LiveCD - @Ryan

    • Debian
      Pros: Very versatile, can be configured for both heavy and lightweight computers - @Ryan; APT as package manager - @Kyle; based on compatibility and usability - @Kyle

    Feel free to add pros and cons to this, so we can compile a good reference.

    X Windows suggestions keep coming in about XFCE. If others are to be added, open a section for them like the distro one :)


  • Fuzzy Search on Material Descriptions including numerical sizes & general descriptions of material t

    - by Kyle
    We're looking to provide a fuzzy search on an electrical materials database (i.e. conduit, cable, etc.). The problem is that, because of a lack of consistency across all material types, we could not split sizes into separate fields from the text description, because some materials are rated by things other than size.

    I've attempted a combination of full-text search and a SQL CLR implementation of the Levenshtein algorithm (for assistance in ranking), but my results are a little funky (i.e. they are not sorting correctly due to improper ranking). For example, if the search term is "3/4" ABCD Conduit", I might get back several irrelevant results in the following order:

        1/2" Conduit
        1/4" X 3/4" Cable
        1/4" Cable Ties
        3/4" DFC Conduit Tees
        3/4" ABCD Conduit
        3/4" Conduit

    I believe I've nailed the problem down to the fact that these two search algorithms do not factor in the relevance of punctuation and numerics. That is, in such a search I'd expect the size to take precedence over any fuzzy match on the rest of the description, but my results don't reflect that.

    My question is: can anyone recommend better search algorithms, or different approaches, that may be better suited to searching a combination of alphanumerics and punctuation characters?
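
    One common approach is to split each description into size tokens and word tokens and score them separately, weighting size agreement far above the fuzzy text match. A sketch of that idea (Python here just to illustrate the scoring; the regex and the 0.7/0.3 weights are arbitrary assumptions to tune against real data):

        import re
        from difflib import SequenceMatcher

        SIZE = re.compile(r'\d+(?:/\d+)?"')  # matches tokens like 3/4" or 10"

        def score(query, description):
            # Exact size agreement dominates the score...
            q_sizes = set(SIZE.findall(query))
            d_sizes = set(SIZE.findall(description))
            size_score = len(q_sizes & d_sizes) / max(len(q_sizes), 1)
            # ...while the remaining text is matched fuzzily.
            text_score = SequenceMatcher(
                None,
                SIZE.sub("", query).lower(),
                SIZE.sub("", description).lower(),
            ).ratio()
            return 0.7 * size_score + 0.3 * text_score

        items = ['1/2" Conduit', '1/4" X 3/4" Cable', '1/4" Cable Ties',
                 '3/4" DFC Conduit Tees', '3/4" ABCD Conduit', '3/4" Conduit']
        for item in sorted(items, key=lambda d: score('3/4" ABCD Conduit', d),
                           reverse=True):
            print(item)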


  • Anything speaking against the bitnami.org Ruby/Rails/Redmine Stack?

    - by Pekka
    I am looking to set up a Redmine server on a Windows virtual machine on my local workstation. (Background in this related question.) I have zero knowledge of Ruby or Rails, and while Redmine may be the opportunity to dip into those platforms somewhat, my first goal is to get it running as quickly and easily as possible. For that, I am eyeing the Bitnami Redmine package. It promises a point-and-click install and a self-contained environment with everything you need.

    Apart from the learning factor, are there any serious limitations this method implies? Any serious cutbacks in customizability? I will want to customize the template right away, for example, and install plugins. The package looks OK to me, but before I install it, I was curious to know whether anybody would advise against it and why.

    Edit: The first impression is great. From 0 to a working Redmine installation in twelve minutes! Wow.


  • R + Bioconductor : combining probesets in an ExpressionSet

    - by Mike Dewar
    Hi, first off, this may be the wrong forum for this question, as it's pretty darn R+Bioconductor specific. Here's what I have:

        library('GEOquery')
        GDS = getGEO('GDS785')
        cd4T = GDS2eSet(GDS)
        cd4T <- cd4T[!fData(cd4T)$symbol == "",]

    Now cd4T is an ExpressionSet object which wraps a big matrix with 19794 rows (probesets) and 15 columns (samples). The final line gets rid of all probesets that do not have corresponding gene symbols. Now the trouble is that most genes in this set are assigned to more than one probeset. You can see this by doing:

        gene_symbols = factor(fData(cd4T)$Gene.symbol)
        length(gene_symbols) - length(levels(gene_symbols))
        [1] 6897

    So only 6897 of my 19794 probesets have unique probeset-to-gene mappings. I'd like to somehow combine the expression levels of each probeset associated with each gene. I don't care much about the actual probe ID for each probe. I'd very much like to end up with an ExpressionSet containing the merged information, as all of my downstream analysis is designed to work with this class.

    I think I could write some code that would do this by hand and make a new ExpressionSet from scratch. However, I'm assuming this can't be a new problem and that code exists to do it, using a statistically sound method to combine the gene expression levels. I'm also guessing there's a proper name for this, but my Google searches aren't showing up much of use. Can anyone help?


  • Project Euler (P14): recursion problems

    - by sean mcdaid
    Hi, I'm doing the Collatz sequence problem in Project Euler (problem 14). My code works with numbers below 100000, but with bigger numbers I get a stack overflow error. Is there a way I can refactor the code to use tail recursion, or otherwise prevent the stack overflow? The code is below:

        import java.util.*;

        public class v4 {
            // use a HashMap to store each computed number and its chain size
            static HashMap<Integer, Integer> hm = new HashMap<Integer, Integer>();

            public static void main(String[] args) {
                hm.put(1, 1);
                final int CEILING_MAX = Integer.parseInt(args[0]);
                int len = 1;
                int max_count = 1;
                int max_seed = 1;
                for (int i = 2; i < CEILING_MAX; i++) {
                    len = seqCount(i);
                    if (len > max_count) {
                        max_count = len;
                        max_seed = i;
                    }
                }
                System.out.println(max_seed + "\t" + max_count);
            }

            // find the size of the hailstone sequence for n
            public static int seqCount(int n) {
                if (hm.get(n) != null) {
                    return hm.get(n);
                }
                if (n == 1) {
                    return 1;
                } else {
                    int length = 1 + seqCount(nextSeq(n));
                    hm.put(n, length);
                    return length;
                }
            }

            // find the next element in the sequence
            public static int nextSeq(int n) {
                if (n % 2 == 0) {
                    return n / 2;
                } else {
                    return n * 3 + 1;
                }
            }
        }
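
    One way to sidestep the stack limit entirely (a sketch in Python for brevity, not a drop-in answer) is to walk each sequence iteratively, remembering the values visited and filling in their chain lengths on the way back. Note, separately, that intermediate Collatz values exceed the signed 32-bit range for some seeds below a million, which wraps around silently in a Java int; Python's arbitrary-precision integers avoid that pitfall:

        def collatz_lengths(ceiling):
            lengths = {1: 1}  # memo: chain length for every value seen so far
            for seed in range(2, ceiling):
                path = []
                n = seed
                while n not in lengths:  # iterate instead of recursing
                    path.append(n)
                    n = n // 2 if n % 2 == 0 else 3 * n + 1
                base = lengths[n]
                for i, m in enumerate(reversed(path), start=1):
                    lengths[m] = base + i  # fill in lengths on the way back
            return lengths

        lengths = collatz_lengths(1000000)
        best = max(range(2, 1000000), key=lengths.get)
        print(best, lengths[best])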


  • Pros/Cons of MySQL vs Postgresql for production Ruby on Rails environment?

    - by cakeforcerberus
    I will soon be switching from SQLite3 to either PostgreSQL or MySQL. What should I consider when making this decision? Is MySQL more suited to Rails than PostgreSQL in some areas, and/or vice versa? Or, as I somewhat suspect, does it not really matter either way?

    Another factor that might play into my decision is the availability of tools to pump my test data from the SQLite3 database to the new one. Is there anything that ActiveRecord provides natively to do this, or any decent plugins/gems to help with the task?

    BONUS: How do I pronounce "PostgreSQL" and sound like I know what I'm talking about? :) Thanks Greg Smith for providing the following link, which shows the most common pronunciations: http://www.postgresql.org/community/survey.33

    UPDATE: Reference this question for more: http://stackoverflow.com/questions/110927/do-you-recommend-postgresql-over-mysql

    FYI: I ended up using MySQL. There is a neat plugin called yamldb that really saved me some time with the data transfer from my SQLite database to my new MySQL one. Instructions on how to install and use it can be found here: http://accidentaltechnologist.com/ruby/change-databases-in-rails-with-yamldb/

    Thanks Tom

