Search Results

Search found 9067 results on 363 pages for 'big fizzy'.

Page 302/363 | < Previous Page | 298 299 300 301 302 303 304 305 306 307 308 309  | Next Page >

  • Applying drop shadows to divs

    - by CJD
    Hi everyone, I need a bit of help applying a drop-shadow image to a range of DIV elements. The elements in question already have a background image, so I am wrapping another DIV around them. Things are complicated further because I'm also using the 960gs CSS framework. This is my current HTML for a content-box type display:

        <div class="grid_12 boxout-shadow-920">
          <div class="boxout">
            <p>planetCJD.co.uk is the personal site and blog of CJD. The site is
            still a work-in-progress but please do have a look around and let me
            know what you think!</p>
          </div>
        </div>

    Boxout CSS:

        .boxout {
          background: url("../images/overlay.png") repeat-x scroll 0 0 #EEEEEE;
          -moz-border-radius: 4px 4px 4px 4px;
          border: 1px solid #DDDDDD;
          margin-bottom: 15px;
          padding: 5px;
        }

    boxout-shadow-920 CSS:

        .boxout-shadow-920 {
          background: url("../images/box-shadow-920.png") no-repeat scroll 50% 101% transparent;
        }

    Now this works to a degree: the box-shadow image shows at the bottom of the content box, which is what I want. However, because I'm using a fixed percentage of 101%, if the content box is too short, not much of the drop-shadow image gets shown, and if the content box is too tall, whitespace starts to appear between the box and the shadow image. So what I'm looking for is a cross-browser, CSS-based solution for doing this properly. I'm sure there is an easy answer to this - any help is appreciated!
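    One possible direction (a sketch, not from the original question): the CSS3 box-shadow property scales with the element's height, which avoids the fixed-percentage background trick entirely; the offset/blur/colour values below are assumptions.

        .boxout {
          /* existing rules unchanged ... */
          -moz-box-shadow: 0 4px 8px rgba(0, 0, 0, 0.3);    /* older Firefox */
          -webkit-box-shadow: 0 4px 8px rgba(0, 0, 0, 0.3); /* older WebKit */
          box-shadow: 0 4px 8px rgba(0, 0, 0, 0.3);
        }

    With this, the wrapper DIV (and the 101% hack) can be dropped; older IE versions would need a fallback such as the original image technique.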

    Read the article

  • C#: ListBoxes as one object (container)

    - by Oyeme
    Hello, I use Visual Studio 2008. I have 5 ListBoxes on a form, and I created a new class file called "scaner.cs". scaner.cs cannot see the ListBoxes, so I create an instance:

        scaner Comp = new scaner(listBox2, listBox1, listBox3, listBox4, listBox5);

    In the scaner.cs file I use it like this:

        class scaner
        {
            public ListBox ls;
            public ListBox lsE;
            public ListBox lsIVars;
            public ListBox lsNumbers;
            public ListBox lsStrings;

            public scaner(ListBox ls, ListBox lsE, ListBox lsIVars, ListBox lsNumbers, ListBox lsStrings)
            {
                this.ls = ls;
                this.lsE = lsE;
                this.lsIVars = lsIVars;
                this.lsNumbers = lsNumbers;
                this.lsStrings = lsStrings;
            }
        }

    My question: how can I replace this big constructor call with something more comfortable? If I had more than 5 ListBoxes, it would be awful. How can I access the form's ListBoxes from another class file? Thanks for answers.
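    A minimal sketch of one way to shrink the call, assuming the class only needs the controls as a group: a params array accepts any number of ListBoxes without adding a parameter per control.

        using System.Windows.Forms;

        class Scaner
        {
            public ListBox[] Boxes;

            // params: the caller can pass 5 ListBoxes or 50 without changing this class
            public Scaner(params ListBox[] boxes)
            {
                Boxes = boxes;
            }
        }

        // usage (hypothetical control names):
        // Scaner comp = new Scaner(listBox1, listBox2, listBox3, listBox4, listBox5);

    Alternatively, the form could pass itself and the class could walk form.Controls to find the ListBoxes, at the cost of coupling the class to the form.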

    Read the article

  • Optimized 2D Tile Scrolling in OpenGL

    - by silicus
    Hello, I'm developing a 2D side-scrolling game and I need to optimize my tiling code to get a better frame rate. Right now I'm using a texture atlas and 16x16 tiles at a 480x320 screen resolution. The level scrolls in both directions and is significantly larger than one screen (thousands of pixels). I use glTranslate for the actual scrolling. So far I've tried:

    - Drawing only the on-screen tiles using GL_TRIANGLES, 2 per square tile (too much overhead)
    - Drawing the entire map as a display list (great on a small level, way too slow on a large one)
    - Partitioning the map into display lists half the size of the screen, then culling display lists (still slows down for 2-directional scrolling; the overdraw is not efficient)

    Any advice is appreciated, but in particular I'm wondering:

    - I've seen vertex arrays/VBOs suggested for this because they're dynamic. What's the best way to take advantage of that? If I simply keep one screen of vertices plus a bit of overdraw, I'd have to recopy the array every few frames to account for the change in relative coordinates (shift everything over and add the new rows/columns). If I use more overdraw this doesn't seem like a big win; it's like the half-screen display list idea.
    - Does glScissor give any gain if used on a bunch of small tiles like this, be it a display list or a vertex array/VBO?
    - Would it be better just to build the level out of large textures and then use glScissor? Would losing the memory savings of tiling be an issue for mobile development if I do this (just curious, I'm currently on a PC)? This approach was mentioned here.

    Thanks :)
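    A minimal sketch of the VBO idea (C, legacy fixed-function OpenGL to match the question, assuming a context where the GL 1.5 buffer-object entry points are available): rather than shifting and recopying a persistent array, rebuild the vertex data for just the visible tile window every frame and re-upload it with GL_DYNAMIC_DRAW. At 16x16 tiles on a 480x320 screen that is at most roughly 650 quads, a trivial upload. The vbo handle and fill_tile_quad() helper are assumptions.

        #include <GL/gl.h>

        /* assumes GL_VERTEX_ARRAY and GL_TEXTURE_COORD_ARRAY client states are enabled */
        void draw_visible_tiles(GLuint vbo, int cam_x, int cam_y)
        {
            enum { TILE = 16, COLS = 480 / TILE + 2, ROWS = 320 / TILE + 2 };
            static GLfloat verts[COLS * ROWS * 4 * 4];  /* x,y,u,v per vertex */
            int n = 0;

            for (int r = 0; r < ROWS; ++r)
                for (int c = 0; c < COLS; ++c, ++n) {
                    /* fill_tile_quad(&verts[n * 16],
                                      cam_x / TILE + c, cam_y / TILE + r); */
                }

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * n * 16, verts,
                         GL_DYNAMIC_DRAW);  /* hint: buffer is rewritten every frame */
            glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void *)0);
            glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat),
                              (void *)(2 * sizeof(GLfloat)));
            glDrawArrays(GL_QUADS, 0, n * 4);
        }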

    Read the article

  • How to reset the context to the original rectangle after clipping it for drawing?

    - by mystify
    I am trying to draw a sequence of pattern images (different repeated patterns in one view). So what I did is this, in a loop:

        CGContextRef context = UIGraphicsGetCurrentContext();

        // clip to the drawing rectangle to draw the pattern for this portion of the view
        CGContextClipToRect(context, drawingRect);

        // the first call here works fine... but for the next, nothing is drawn
        CGContextDrawTiledImage(context, CGRectMake(0, 0, 2, 31), [img CGImage]);

    I think that after I've clipped the context to draw the pattern in the specific rectangle, I've cut a snippet out of the big canvas, and the next time my canvas is gone, so I can't cut out another snippet. So I must reset that clipping somehow in order to be able to draw another pattern somewhere else?

    Edit: In the documentation for CGContextClip I found this: "... Therefore, to re-enlarge the paintable area by restoring the clipping path to a prior state, you must save the graphics state before you clip and restore the graphics state after you've completed any clipped drawing. ..." Well then, how do I store the graphics state before clipping, and how do I restore it?
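    For reference, the save/restore pair the documentation refers to is CGContextSaveGState / CGContextRestoreGState. A minimal sketch, bracketing each clipped draw inside the loop:

        // inside the loop, for each pattern rectangle:
        CGContextSaveGState(context);              // snapshot state, incl. clip path
        CGContextClipToRect(context, drawingRect); // clip applies to this draw only
        CGContextDrawTiledImage(context, CGRectMake(0, 0, 2, 31), [img CGImage]);
        CGContextRestoreGState(context);           // full canvas is paintable again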

    Read the article

  • How can I keep the logic that translates a ViewModel's values into a Where clause for a LINQ query out of my controller?

    - by Mr. Manager
    This same problem keeps cropping up. I have a ViewModel that doesn't have any persistent backing; it is just a ViewModel used to generate a search input form. I want to build a large Where clause from the values the user entered. If the action accepts a SearchViewModel as a parameter, how do I do this without passing my ViewModel to my service layer? The service shouldn't know about ViewModels, right? And if I serialize it, it would just be a big string and the keys/values would no longer be strongly typed. SearchViewModel (just a snippet):

        [Display(Name = "Address")]
        public string AddressKeywords { get; set; }

        /// <summary>Gets or sets the census.</summary>
        public string Census { get; set; }

        /// <summary>Gets or sets the lot block sub.</summary>
        public string LotBlockSub { get; set; }

        /// <summary>Gets or sets the owner keywords.</summary>
        [Display(Name = "Owner")]
        public string OwnerKeywords { get; set; }

    In my controller action I was thinking of something like this, but I would think all this logic doesn't belong in my controller:

        ActionResult GetSearchResults(SearchViewModel model)
        {
            var query = service.GetAllParcels();
            if (model.Census != null)
            {
                query = query.Where(x => x.Census == model.Census);
            }
            if (model.OwnerKeywords != null)
            {
                query = query.Where(x => x.Owners == model.OwnerKeywords);
            }
            return View(query.ToList());
        }
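    One possible arrangement (a sketch, and only one of several patterns): let the search model translate itself into a filter over the domain type, so the service and controller never exchange ViewModels. The Parcel type is inferred from the query above and is an assumption, as is declaring the class partial.

        using System.Linq;

        public partial class SearchViewModel
        {
            // The ViewModel owns the translation of its own fields into a filter.
            public IQueryable<Parcel> ApplyTo(IQueryable<Parcel> query)
            {
                if (!string.IsNullOrEmpty(Census))
                    query = query.Where(x => x.Census == Census);
                if (!string.IsNullOrEmpty(OwnerKeywords))
                    query = query.Where(x => x.Owners == OwnerKeywords);
                return query;
            }
        }

        // the controller stays thin:
        // return View(model.ApplyTo(service.GetAllParcels()).ToList());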

    Read the article

  • Ruby on Rails basics help

    - by CHID
    Hi, I created a scaffolded application in Rails by the name of Product. The products_controller.rb file contains the following:

        class ProductsController < ApplicationController
          def new
            @product = Product.new
            respond_to do |format|
              format.html # new.html.erb
              format.xml  { render :xml => @product }
            end
          end

          def create
            @product = Product.new(params[:product])
            respond_to do |format|
              if @product.save
                flash[:notice] = 'Product was successfully created.'
                format.html { redirect_to(@product) }
                format.xml  { render :xml => @product, :status => :created, :location => @product }
              else
                format.html { render :action => "new" }
                format.xml  { render :xml => @product.errors, :status => :unprocessable_entity }
              end
            end
          end
        end

    Now, when the URL http://localhost:3000/products/create is given, or the new-product link is clicked, control is transferred to the new definition in the controller and an instance variable @product is created. BUT WHERE IS THIS VARIABLE PASSED? The action in turn renders new.html.erb, which contains:

        <% form_for(@product) do |f| %>
          <!-- all form element declarations -->
          <%= f.submit "Create" %>
        <% end %>

    Here @product is initialized in the controller file and passed to new.html.erb. So where does form_for(@product) get its data? How does control get transferred to the create action in the controller when the submit button is clicked? Nowhere is an action pointing at the controller specified. In the create action, what does redirect_to(@product) do, where @product is the object received from the form? I am very much confused on the basics of RoR. Someone please help me clarify this. Pardon me for making such a big post; I have lots of doubts about this single piece of code.

    Read the article

  • Testing Web Application: "Mirror" ad-hoc testing in another window

    - by Narcissus
    I don't even really know if the title is the best way to explain what I'm trying to do, but anyway... We have a web app that is being ported to a number of DB backends via MDB2. Our unit tests are pretty lacking at the moment, but our internal users are pretty good at knowing what to test to see if things are broken. What I'm imagining is a browser plug-in (I don't really care which browser it's for) or a similar system that essentially takes every event from one window and mirrors it in the other browser(s). The reason I'd like this is so that I can have various installations that use different DB backends and have the user open a window/tab to each installation. From there, I'd like them to be able to work in one window and have that work occur at the same time in each of the cloned windows. They should then be able to do some quick eyeballing of the information that comes back, without having to worry (very much) about timing differences. I know it's a big ask, but I figure if anyone knows of a solution, I'd find it here... Any thoughts?

    Read the article

  • Why "Content-Length: 0" in POST requests?

    - by stesch
    A customer sometimes sends POST requests with Content-Length: 0 when submitting a form (10 to over 40 fields). We tested it with different browsers and from different locations but couldn't reproduce the error. The customer is using Internet Explorer 7 and a proxy. We asked them to have their system administrator look into the problem from their side and run some tests without the proxy, etc. In the meantime (half a year later and still no answer) I'm curious whether somebody else knows of similar problems with a Content-Length: 0 request, maybe from inside some Windows network with a special proxy for big companies. Is there a known problem with Internet Explorer 7? With a proxy system? The Windows network itself? Google only shows something in the context of NTLM (and similar) authentication, but we aren't using that in the web application. Maybe it's in the way the proxy handles Windows logins in the customer's network? (I'm no Windows expert, just guessing.) I have no further information about the infrastructure.

    Read the article

  • jQuery - Wait until image loads before performing function

    - by Steven
    I'm trying to create a simple portfolio page. I have a list of thumbs and an image. When you click on a thumb, the image changes. When a thumbnail is clicked, I'd like the image to fade out, wait until the new image has loaded, then fade back in. The problem I have right now is that some of the images are pretty big, so it fades out and then fades back in immediately, sometimes while the image is still loading. I'd like to avoid setTimeout, since an image can load faster or slower than whatever time I set. Here's my code:

        $(function() {
          $('img#image').attr("src", $('ul#thumbs li:first img').attr("src"));
          $('ul#thumbs li img').click(function() {
            $('img#image').fadeOut(700);
            var src = $(this).attr("src");
            $('img#image').attr("src", src);
            $('img#image').fadeIn(700);
          });
        });

        <img id="image" src="" alt="" />
        <ul id="thumbs">
          <li><img src="/images/thumb1.png" /></li>
          <li><img src="/images/thumb2.png" /></li>
          <li><img src="/images/thumb3.png" /></li>
        </ul>
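    A minimal sketch of one fix: swap the src in the fadeOut callback, and fade back in from the image's load handler so the fade-in always waits for the new image. The this.complete check covers cached images, which may not fire load again.

        $(function() {
          $('img#image').attr('src', $('ul#thumbs li:first img').attr('src'));

          $('ul#thumbs li img').click(function() {
            var src = $(this).attr('src');
            $('img#image').fadeOut(700, function() {
              $(this)
                .one('load', function() { $(this).fadeIn(700); })
                .attr('src', src);
              if (this.complete) $(this).trigger('load'); // already cached
            });
          });
        });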

    Read the article

  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using:

        SELECT * FROM table01 WHERE UUID = 'whatever';

    The UUID column is a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row identifier available to the front-end. Since I have to select by UUID and not by ID, I need to know which of these two options I should go for, if, say, the table consists of 100,000 rows. What speed differences would I be looking at, and would the index on the UUID grow too large and slow down the DB?

    Option 1: get the ID before doing the "big" select.

        $id = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric}'";
        $rows = "SELECT * FROM table01 WHERE ID = $id";

    Option 2: keep it the way it is now, using the UUID.

        SELECT * FROM table01 WHERE UUID = '{alphanumeric}';

    Side note: all new rows are created after checking that the system-generated unique id does not already exist before inserting, keeping the column always unique. The example table:

        CREATE TABLE Table01 (
          ID int NOT NULL PRIMARY KEY,
          UUID char(15),
          name varchar(100),
          url varchar(255),
          `date` datetime
        ) ENGINE = InnoDB;

        CREATE UNIQUE INDEX UUID ON Table01 (UUID);

    Read the article

  • How to create a desktop shortcut to a website with website logo as icon?

    - by Wbdvlpr
    Hi, I need a solution that allows users to create a desktop shortcut to a website, preferably with the website logo as the shortcut icon. There are ways users can create a shortcut themselves, such as dragging the favicon to the desktop or right-clicking and choosing New > Shortcut. This works, but it creates the shortcut with the default IE icon on Windows; I am trying to avoid that method for the icon and some other reasons. I thought of creating a website-title.URL file and telling users to download and save it to their desktop. Again, this works but doesn't solve the icon problem, as the website logo (.ico) has to be on the local disk at a pre-specified location [IconFile=path]. I was wondering if it is possible to create some sort of installer or Windows application that users can download from the website. Once it is executed by the user, it creates this .URL file on the user's desktop with the IconFile path pre-specified, and copies the website-logo.ico to the path specified in the .URL file. So my questions are:

    - Is it possible to create such a solution?
    - What programming languages can be used to achieve this?
    - Can this installer/app be made to work on non-Windows machines as well?
    - From a development point of view, how big a project is this, if I have to hire a programmer to do it?

    Please let me know if you need more information. Thanks
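    For reference, a sketch of the Internet Shortcut (.URL) format being discussed; the paths are examples only, and the installer would need to drop the .ico file at whatever path it writes into IconFile:

        [InternetShortcut]
        URL=http://www.example.com/
        IconFile=C:\ProgramData\ExampleSite\website-logo.ico
        IconIndex=0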

    Read the article

  • Android and fairly large SQLite datafiles

    - by SK9
    I'm starting an Android project, a port of an existing iPhone project I've completed. I have a fairly large read-only SQLite database, about 100 MB in all, called "mydata.sqlite". Where do I place this in my Eclipse workspace? It's too big for "assets". Next, how do I best get at the file? I would think to try (handling exceptions later) something like:

        SQLiteDatabase myDatabase = null;
        myDatabase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);

    But I would then need the path string myPath, and since I don't know where to put the resource, I don't know what this needs to be. Can I put "mydata.sqlite" into "res/raw" (once I create "raw" in Eclipse) and then reference it as a resource with "R.raw.mydata"? I would very much appreciate some direct help here, rather than a reference to a tutorial. I have checked tons of these, including those already cited here on Stack Overflow, and I've also gone through the "Notepad" project in the Android developer documents. However, these and the documentation typically consider only new, empty or small databases. This should be a simple thing, and given the time I've spent already it is perhaps easier to ask. Thanking you kindly in advance for your assistance.
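    A sketch of one common arrangement for databases this size (assumptions: raw resources and assets had tight size limits on older Android versions, so the file is downloaded, or copied out of a split/compressed asset, on first run; the path below is an example):

        import java.io.File;
        import android.database.sqlite.SQLiteDatabase;
        import android.os.Environment;

        // after the first-run copy/download has placed the file here:
        File dbFile = new File(Environment.getExternalStorageDirectory(),
                               "myapp/mydata.sqlite");
        SQLiteDatabase db = SQLiteDatabase.openDatabase(
                dbFile.getAbsolutePath(), null, SQLiteDatabase.OPEN_READONLY);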

    Read the article

  • database table design

    - by e.b.white
    I designed the tables below for a system that looks like a package-delivery system. For example, after a user receives the package, the postman records it in the system: the state (history table) is "delivered", the operator is this postman, and the current state (state table) is of course "delivered".

    history table:

        +-------------+---------------------+
        | Field       | Desc                |
        +-------------+---------------------+
        | id          | PRIMARY KEY         |
        | package_id  | package_tracking_id |
        | state       | package_state       |
        | operators   | operators           |
        | create_time | create_time         |
        +-------------+---------------------+

    state table:

        +-------------+----------------------+
        | Field       | Desc                 |
        +-------------+----------------------+
        | id          | PRIMARY KEY          |
        | package_id  | package_tracking_id  |
        | state       | latest_package_state |
        +-------------+----------------------+

    The above is just the basic information to record; some other information (like invoice, destination, ...) should be recorded as well. But there are different service types, like s1 and s2: for s1 the invoice does not need to be recorded but s2 does need it, and maybe s1 needs some other information recorded (like the tel of the end user). On top of that, at delivery way stations there is additional information to record, and the information differs per service type. My questions are:

    1. For the different service types, should I declare different tables (option A) or just one big table that can record all information for all types (option B)?
    2. If option A, since the basic information above is a MUST, how can I avoid declaring duplicate fields in the different tables?
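    A sketch of one common answer to question 2 (option A without duplicated base fields): keep the shared columns in one table and put the service-type-specific columns in extension tables keyed by the base row. Column types and names below are assumptions.

        -- shared fields, one row per history event
        CREATE TABLE history (
            id           INT PRIMARY KEY,
            package_id   VARCHAR(32) NOT NULL,
            state        VARCHAR(32) NOT NULL,
            operators    VARCHAR(64),
            create_time  DATETIME
        ) ENGINE = InnoDB;

        -- s2-only fields live in their own table, keyed by the history row
        CREATE TABLE history_s2 (
            history_id   INT PRIMARY KEY,
            invoice_no   VARCHAR(32),
            end_user_tel VARCHAR(20),
            FOREIGN KEY (history_id) REFERENCES history(id)
        ) ENGINE = InnoDB;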

    Read the article

  • Skip reading headers in MATLAB

    - by Paul
    I had a similar question, but what I am trying now is to read files in .txt format into MATLAB. My problem is with the headers: many times, due to errors, the system rewrites the headers in the middle of the file, and then MATLAB cannot read the file. Is there a way to skip them? I know I can skip reading some characters if I know what the character is. Here is the code I am using:

        [c, pathc] = uigetfile({'*.txt'}, 'Select the data', 'V:\data');
        file = [pathc c];
        data = dlmread(file, ',', 1, 4);

    This way I let the user pick the file. My files are huge, typically [86400 x 125], so naturally there are 125 header fields or more, depending on the file. Because the files are so big I cannot copy one here, but the format is like:

        day       time   col1  col2  col3  col4  ...
        2/3/2010  0:10   3.4   4.5   5.6   4.4   ...
        ...

    and so on. Thanks
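    A minimal sketch of one way to tolerate headers repeated mid-file (assuming data lines always start with a digit, as the dates above do, while header lines do not):

        % Read line by line, keeping only lines that start with a digit;
        % repeated header blocks in the middle of the file are skipped.
        fid = fopen(file, 'r');
        rows = {};
        while true
            tline = fgetl(fid);
            if ~ischar(tline), break; end            % end of file
            if isempty(tline) || ~isstrprop(tline(1), 'digit')
                continue;                            % header/blank line: skip
            end
            rows{end+1} = tline;                     %#ok<AGROW>
        end
        fclose(fid);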

    Read the article

  • How to speed-up a simple method (preferably without changing interfaces or data structures)?

    - by baol
    I have some data structures:

    - all_unordered_m is a big vector containing all the strings I need (all different)
    - ordered_m is a small vector containing the indexes of a subset of the strings (all different) in the former vector
    - position_m maps the indexes of objects from the first vector to their position in the second one.

    The string_after(index, reverse) method returns the string referenced by ordered_m after all_unordered_m[index]. ordered_m is considered circular and is explored in natural or reverse order depending on the second parameter. The code is something like the following:

        struct ordered_subset {
            // [...]
            std::vector<std::string>& all_unordered_m;           // size = n >> 1
            std::vector<size_t> ordered_m;                       // size << n
            std::tr1::unordered_map<size_t, size_t> position_m;

            const std::string& string_after(size_t index, bool reverse) const
            {
                size_t pos = position_m.find(index)->second;
                if (reverse)
                    pos = (pos == 0 ? ordered_m.size() - 1 : pos - 1);
                else
                    pos = (pos == ordered_m.size() - 1 ? 0 : pos + 1);
                return all_unordered_m[ordered_m[pos]];
            }
        };

    Given that:

    - I do need all of the data structures for other purposes;
    - I cannot change them, because I need to access the strings by their id in all_unordered_m and by their index inside the various ordered_m;
    - I need to know the position of a string (identified by its position in the first vector) inside the ordered_m vector;
    - I cannot change the string_after interface without changing most of the program.

    How can I speed up the string_after method, which is called billions of times and is eating up about 10% of the execution time?
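    One direction often suggested for this shape of problem (a sketch, assuming the ids are dense in [0, n) so memory allows it): replace the hash lookup with a plain vector indexed by id, turning the hot path into two array accesses.

        #include <string>
        #include <vector>

        struct ordered_subset {
            std::vector<std::string>& all_unordered_m;
            std::vector<size_t> ordered_m;
            // position_by_id[id] == index into ordered_m; sized to
            // all_unordered_m.size() and filled when ordered_m is built
            std::vector<size_t> position_by_id;

            const std::string& string_after(size_t index, bool reverse) const
            {
                size_t pos = position_by_id[index];   // O(1), no hashing
                if (reverse)
                    pos = (pos == 0 ? ordered_m.size() - 1 : pos - 1);
                else
                    pos = (pos == ordered_m.size() - 1 ? 0 : pos + 1);
                return all_unordered_m[ordered_m[pos]];
            }
        };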

    Read the article

  • What would happen if the same file is read and appended to at the same time? (Python)

    - by Shane
    I'm writing a script using two separate threads, one doing file reading and the other doing appending; both threads run fairly frequently. My question is: if one thread happens to read the file while the other is just in the middle of appending a string such as "This is a test" to the file, what happens? I know that if you are appending a smaller-than-buffer string, no matter how frequently you read the file in the other thread, there will never be an incomplete line such as "This i" in what you read; the OS will either do: append "This is a test" then read from the file; or: read from the file then append "This is a test"; and this would never happen: append "This i", read from the file, append "s a test". But if "This is a test" is big enough (assuming a bigger-than-buffer string), the OS can't do the append in one operation, so the append would be divided into two: first append "This i", then append "s a test". In that situation, if I happen to read the file in the middle of the whole append operation, would I get the result append "This i", read from the file, append "s a test", meaning I might read a file that includes an incomplete string?
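    A sketch of the usual way to sidestep the question entirely (for threads in the same process, as described): share one lock between reader and writer, so a read can never observe a half-flushed append.

        import threading

        file_lock = threading.Lock()

        def append_line(path, text):
            with file_lock:              # writer holds the lock for the whole write
                with open(path, "a") as f:
                    f.write(text + "\n")

        def read_all(path):
            with file_lock:              # reader never runs mid-append
                with open(path, "r") as f:
                    return f.read()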

    Read the article

  • Stop htaccess rewrite from creating random pages

    - by Vistol
    Recently I saw in my Webmaster Tools that some random sites are linking to my site. That in itself is not a big issue. The issue is that the linked pages are not real pages, because of my htaccess file. This is the htaccess code I'm running:

        #Options +FollowSymLinks
        RewriteEngine on
        RewriteRule ^([^/\.]+)/?$ index.php?id=$1 [L]
        RewriteRule ^([0-9]+)/(.*)$ index.php?id=$1 [L]

    So the real URLs look like:

        mysite.com/folder/999/TITLE-OR-NAME

    But because I only check the first segment ($1), which is an ID number, this htaccess file allows hackers to link to my site with random URLs like:

        mysite.com/folder/999/TITLE-OR-NAME1
        mysite.com/folder/999/TITLE-OR-NAME2
        mysite.com/folder/999/TITLE-OR-NAME3
        mysite.com/folder/999/TITLE-OR-NAME4
        mysite.com/folder/999/TITLE-OR-NAME5

    The worst part comes when Google tells me that I am duplicating content. I am not actually duplicating content; the htaccess is duplicating it for me. And yes I know, I'm a newbie programmer, but I'd really appreciate your help with this, because I'm struggling to find a solution. Thank you very much for all your support to this newbie :)
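    One commonly used fix (a sketch): capture the title segment as well and let index.php issue a 301 redirect to the canonical URL whenever the requested title doesn't match the real one for that id. The slug parameter name is an assumption.

        RewriteEngine on
        RewriteRule ^([^/\.]+)/?$ index.php?id=$1 [L]
        # pass the title through so the script can validate it
        RewriteRule ^([0-9]+)/(.*)$ index.php?id=$1&slug=$2 [L]

    In index.php, compare $_GET['slug'] against the stored title for $_GET['id'] and, when they differ, send a 301 status plus a Location header pointing at the canonical URL; search engines then collapse the random variants onto one page.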

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists:

    - all 4096-byte combinations found [column 'bytes'], and
    - the number of times each byte combination appeared [column 'occurrences']

    Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow:

    - Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values. This is unbelievably slow.
    - Go through each 4096-byte combination in the file and save until 1 million rows of data are in a temporary table. Go through that table and fix the entries (combine repeating byte combinations), then copy to the big table. Repeat with the next 1 million rows. This is faster by a bit, but still unbelievably slow.

    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
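    A sketch of the aggregate-first shape (Python, assuming disjoint 4096-byte blocks; a 170 MB file then has only about 43,000 blocks, which fits in memory easily, whereas overlapping windows would explode the key count and push you toward an external sort-then-count instead):

        from collections import Counter

        BLOCK = 4096

        def count_blocks(path):
            counts = Counter()
            with open(path, "rb") as f:
                while True:
                    block = f.read(BLOCK)
                    if not block:
                        break
                    counts[block] += 1   # count in memory: no slow table updates
            return counts

        # one bulk save at the end ("save fast"), zero updates ("update slow")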

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?
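    A sketch of the partitioned-loop shape using the Task Parallel Library (one DataContext per thread, since a LINQ to SQL DataContext is not thread-safe; the context type, Item, and the loader are assumptions):

        using System.Linq;
        using System.Threading.Tasks;

        // summaries: the already-retrieved list rows (hypothetical)
        int[] ids = summaries.Select(s => s.Id).ToArray();
        var hydrated = new Item[ids.Length];

        Parallel.For(0, ids.Length, i =>
        {
            // fresh context per iteration: L2S DataContexts must not be shared
            using (var dc = new MyDataContext())
            {
                hydrated[i] = LoadItemWithChildren(dc, ids[i]);  // hypothetical loader
            }
        });

    Hand-rolled partitioning into N threads works the same way; Parallel.For just chooses the degree of parallelism and splits the ranges for you.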

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy:

    - the 'application' team developed the application-specific logic, often requiring deep insight into specific industry problems
    - the 'generic' team developed the parts that were common/generic to all applications (user-interface-related stuff, database access, low-level Windows stuff, ...)

    Over the years the boundaries between the teams have become fuzzy:

    - the 'application' teams often write application-specific functionality with a 'generic' part; instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team
    - the 'generic' team's focus seems to be more 'maintenance oriented'. All of the 'very generic' code has already been written, so no new development is needed there, but they continuously have to support all the functionality donated by the application teams.

    All this seems to indicate that it's no longer a good idea to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...). How do you split up the work between different teams if you have different applications?

    - everybody can write generic code and donates it to a central 'generic' team?
    - everybody can write generic code, but nobody 'manages' it (everybody is the owner)?
    - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)?
    - there is no overlap in code between the different applications?
    - some other way?

    Notice that the advantage of having the mix (allowing everybody to write everywhere in the code) is that:

    - code is written in a more flexible way
    - it's easier to debug the code, since you can easily step into the 'generic' code in the debugger

    But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if no clear team manages it anymore. What is your vision?

    Read the article

  • Firefox does not load large images

    - by Pradeep
    I am stuck with a kind of bug in Firefox, where it is unable to load big images (I have an 8 MB image) from the server. Loading the image works fine in IE. I am still looking for ways to get rid of this problem. I changed the server (IIS) settings to allow bigger file sizes. I also tried the "load" event on the image using jQuery with all the options listed at http://api.jquery.com/load-event/, but nothing has worked so far. If any of you have come across a similar problem and a way to resolve it, it would be nice to hear from you. Please note: high-resolution images are part of the requirement. Code:

        <style>
          img {
            background-color: #FFFFFF;
            background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif);
            background-repeat: no-repeat;
            background-position: center center;
          }
        </style>
        <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script>
        <script>
          jQuery(document).ready(function($) {
            // var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg";
            // set up the node / element
            var _im = $("#main");
            // _im.bind("load", function() { $(this).fadeIn(); });
            // set the src attribute now, after insertion into the DOM
            // _im.attr('src', _url);
            $("#main").one("load", function() {
              alert('loaded');
            }).each(function() {
              if (this.complete) {
                $(this).trigger("load");
              }
            });
          });
        </script>
        </head>
        <body>
          <div id="target">
            <img id="main" src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"></img>
          </div>
        </body>
        </html>

    Read the article

  • EF 4.0: Guid or Int as a primary key

    - by bigb
    I am implementing custom ASP.NET Membership using EF 4.0. Is there any reason why I should use a Guid as the primary key in the user tables? As far as I know, an int as a PK on SQL Server performs better than strings, and an int is easier to iterate. Also, for security purposes, if I need to pass the id somewhere in a URL, I can encrypt it somehow and pass it as a string with no problems. But if I want to use a server-side auto-generated Guid with EF 4.0, I need this trick: http://leedumond.com/blog/using-a-guid-as-an-entitykey-in-entity-framework-4/

    I can't see any case where I should use a Guid as a PK, except maybe when a system is going to have millions and millions of users; but then again, theoretically, a Guid could be duplicated sometime, couldn't it? Anyway, Int32's range is 2,147,483,647, which is pretty much enough even for a very, very big system, and if that were still not enough I could go with Int64; in that case I could have 9,223,372,036,854,775,807 rows. Pretty much, huh? On the other hand, Microsoft uses Guids as PKs in their ASP.NET Membership implementation: [aspnetdb].[aspnet_Users] has PK UserId of type uniqueidentifier. There must be some reason/explanation for why they did it. Maybe someone has ideas/experience about this?
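    For reference, a sketch of the server-generated sequential-Guid column that discussions like the linked article revolve around (NEWSEQUENTIALID() avoids the index fragmentation that random NEWID() values cause on a clustered PK; table and column names are illustrative):

        CREATE TABLE Users (
            UserId   UNIQUEIDENTIFIER NOT NULL
                     CONSTRAINT DF_Users_UserId DEFAULT NEWSEQUENTIALID()
                     PRIMARY KEY,
            UserName NVARCHAR(256) NOT NULL
        );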

    Read the article

  • Why do the usable sizes differ

    - by Raigex
    Again, I don't really know how to phrase the question, so I will explain. I have a video recorder application. I open my camera with:

        cameraRecorder = Camera.open(1);  // the front-facing camera

    And get the camera parameters and all supported preview sizes:

        Camera.Parameters tmpParams = cameraRecorder.getParameters();
        List<Camera.Size> tmpList = tmpParams.getSupportedPreviewSizes();

    One of the preview sizes on the Galaxy Tab 10.1 running ICS (4.0.4) is 800x600, but when I try to set the video size on my media recorder:

        mediaRecorder.setVideoSize(800, 600);

    I get this error:

        12-19 17:27:55.035: E/CameraSource(110): Video dimension (800x600) is unsupported
        12-19 17:27:55.035: E/StagefrightRecorder(110): cameraSource do not init
        12-19 17:27:55.035: E/StagefrightRecorder(110): setupCameraSource failed. (-19)
        12-19 17:27:55.035: E/StagefrightRecorder(110): setupMediaSource is failed. (-19)
        12-19 17:27:55.035: E/StagefrightRecorder(110): setupMPEG4Recording is failed. (-19)
        12-19 17:27:55.035: E/MediaRecorder(30119): start failed: -19

    Does anyone know why this discrepancy might exist? (I know one of the supported record sizes is 1280x720, but that is too big for me.)
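    The commonly cited explanation (an assumption here, not confirmed in the question): supported preview sizes are not guaranteed to be recordable, and since API 11 there is a separate list for video. A defensive sketch that picks the largest recordable size within the target bounds:

        import java.util.List;
        import android.hardware.Camera;

        Camera.Parameters params = cameraRecorder.getParameters();

        // Prefer the dedicated video-size list when the device provides one;
        // getSupportedVideoSizes() may return null, meaning preview sizes apply.
        List<Camera.Size> videoSizes = params.getSupportedVideoSizes();
        List<Camera.Size> candidates =
                (videoSizes != null) ? videoSizes : params.getSupportedPreviewSizes();

        Camera.Size best = null;
        for (Camera.Size s : candidates) {
            if (s.width <= 800 && s.height <= 600
                    && (best == null || s.width * s.height > best.width * best.height)) {
                best = s;
            }
        }
        if (best != null) {
            mediaRecorder.setVideoSize(best.width, best.height);
        }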

    Read the article

  • How to speed-up a simple method? (possibly without changing interfaces or data structures)

    - by baol
    Hello. I have some data structures:

    - all_unordered_m is a big vector containing all the strings I need (all different)
    - ordered_m is a small vector containing the indexes of a subset of the strings (all different) in the former vector
    - position_m maps the indexes of objects from the first vector to their position in the second one.

    The string_after(index, reverse) method returns the string referenced by ordered_m after all_unordered_m[index]. ordered_m is considered circular and is explored in natural or reverse order depending on the second parameter. The code is something like the following:

        struct ordered_subset {
            // [...]
            std::vector<std::string>& all_unordered_m;  // size = n >> 1
            std::vector<size_t> ordered_m;              // size << n
            std::map<size_t, size_t> position_m;        // positions of strings in ordered_m

            const std::string& string_after(size_t index, bool reverse) const
            {
                size_t pos = position_m.find(index)->second;
                if (reverse)
                    pos = (pos == 0 ? ordered_m.size() - 1 : pos - 1);
                else
                    pos = (pos == ordered_m.size() - 1 ? 0 : pos + 1);
                return all_unordered_m[ordered_m[pos]];
            }
        };

    Given that:

    - I do need all of the data structures for other purposes;
    - I cannot change them, because I need to access the strings by their id in all_unordered_m and by their index inside the various ordered_m;
    - I need to know the position of a string (identified by its position in the first vector) inside the ordered_m vector;
    - I cannot change the string_after interface without changing most of the program.

    How can I speed up the string_after method, which is called billions of times and is eating up about 10% of the execution time?

    Read the article

  • short-cutting equality checking in F#?

    - by John Clements
    In F#, the equality operator (=) is generally extensional rather than intensional. That's great! Unfortunately, it appears to me that F# does not use pointer equality to short-cut these extensional comparisons. For instance, this code:

        type Z = MT | NMT of Z ref

        // create a Z:
        let a = ref MT
        // make it point to itself:
        a := NMT a
        // check to see whether it's equal to itself:
        printf "a = a: %A\n" (a = a)

    ... gives me a big fat segmentation fault[*], despite the fact that 'a' and 'a' both evaluate to the same reference. That's not so great. Other functional languages (e.g. PLT Scheme) get this right, using pointer comparisons conservatively to return 'true' when it can be determined by a pointer comparison. So: I'll accept the fact that F#'s equality operator doesn't use short-cutting; is there some way to perform an intensional (pointer-based) equality check? The (==) operator is not defined on my types, and I'd love it if someone could tell me that it's available somehow. Or tell me that I'm wrong in my analysis of the situation: I'd love that, too...

    [*] That would probably be a stack overflow on Windows; there are things about Mono that I'm not that fond of...
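    For the pointer-based check being asked about, a sketch (both calls are in the standard F# library): obj.ReferenceEquals, or its F#-native wrapper LanguagePrimitives.PhysicalEquality, compares references without traversing the (cyclic) structure.

        // physical (pointer) equality: true, and terminates on the cyclic value
        printf "same object: %b\n" (obj.ReferenceEquals(a, a))
        printf "same object: %b\n" (LanguagePrimitives.PhysicalEquality a a)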

    Read the article
