Search Results

Search found 8893 results on 356 pages for 'stored'.

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page, it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit, we call that a compression failure; InnoDB then needs to split up the page and try to compress again. Compression failures are therefore bad for performance and should be minimized.

    Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. So it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even a per-index basis, wouldn't it?

    This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

        +-----------------+--------------+------+
        | Field           | Type         | Null |
        +-----------------+--------------+------+
        | database_name   | varchar(192) | NO   |
        | table_name      | varchar(192) | NO   |
        | index_name      | varchar(192) | NO   |
        | compress_ops    | int(11)      | NO   |
        | compress_ops_ok | int(11)      | NO   |
        | compress_time   | int(11)      | NO   |
        | uncompress_ops  | int(11)      | NO   |
        | uncompress_time | int(11)      | NO   |
        +-----------------+--------------+------+

    This is similar to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

        SELECT database_name, table_name, index_name,
               compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        ORDER BY failures DESC;

    would reveal the most problematic tables and indexes, the ones with the highest number of compression failures. From there, the way to improve performance is to try increasing the compressed page size, or changing the structure of the table/indexes or the data being stored, and see whether that has a positive impact.
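    A useful derived metric is the failure rate as a percentage of all compression attempts, which makes small and large tables comparable. As a minimal sketch (C# with the MySql.Data connector; the connection string is a placeholder), something along these lines would surface the worst offenders:

        using System;
        using MySql.Data.MySqlClient;

        class CompressionFailureReport
        {
            static void Main()
            {
                // Failed attempts as a percentage of all attempts, per index.
                const string sql = @"
                    SELECT database_name, table_name, index_name,
                           100.0 * (compress_ops - compress_ops_ok) / compress_ops AS failure_pct
                    FROM information_schema.innodb_cmp_per_index
                    WHERE compress_ops > 0
                    ORDER BY failure_pct DESC";

                using (var conn = new MySqlConnection("server=localhost;uid=monitor;pwd=secret"))
                {
                    conn.Open();
                    using (var cmd = new MySqlCommand(sql, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}.{1} ({2}): {3:F1}% failures",
                                reader.GetString(0), reader.GetString(1),
                                reader.GetString(2), Convert.ToDouble(reader[3]));
                    }
                }
            }
        }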

    Read the article

  • Can someone explain the (reasons for the) implications of column vs row major in multiplication/concatenation?

    - by sebf
    I am trying to learn how to construct view and projection matrices, and keep reaching difficulties in my implementation owing to my confusion about the two conventions for matrices. I know how to multiply a matrix, and I can see that transposing before multiplication would completely change the result, hence the need to multiply in a different order. What I don't understand, though, is what is meant by it being only a 'notational convention': from the articles here and here, the authors appear to assert that it makes no difference how the matrix is stored or transferred to the GPU, but on the second page that matrix is clearly not equivalent to how it would be laid out in memory for row-major; and if I look at a populated matrix in my program, I see the translation components occupying the 4th, 8th and 12th elements. Given that "post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices", why, in the following snippet of code:

        Matrix4 r = t3 * t2 * t1;
        Matrix4 r2 = t1.Transpose() * t2.Transpose() * t3.Transpose();

    does r != r2, and why does pos3 != pos for:

        Vector4 pos = wvpM * new Vector4(0f, 15f, 15f, 1);
        Vector4 pos3 = wvpM.Transpose() * new Vector4(0f, 15f, 15f, 1);

    Does the multiplication process change depending on whether the matrices are row- or column-major, or is it just the order (for an equivalent effect)? One thing that isn't helping this become any clearer is that, when provided to DirectX, my column-major WVP matrix is used successfully to transform vertices with the HLSL call mul(vector, matrix), which should result in the vector being treated as row-major. So how can the column-major matrix provided by my math library work?
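    For what it's worth, the identity at the heart of the snippets is (A*B)^T = B^T * A^T: transposing each factor and reversing the order yields the transpose of the original product, not the product itself, which is why r != r2. A minimal sketch (plain 2x2 arrays rather than the poster's Matrix4, which is assumed to come from their math library) demonstrating this:

        using System;

        class TransposeDemo
        {
            static double[,] Mul(double[,] a, double[,] b)
            {
                var r = new double[2, 2];
                for (int i = 0; i < 2; i++)
                    for (int j = 0; j < 2; j++)
                        for (int k = 0; k < 2; k++)
                            r[i, j] += a[i, k] * b[k, j];
                return r;
            }

            static double[,] T(double[,] a) =>
                new double[2, 2] { { a[0, 0], a[1, 0] }, { a[0, 1], a[1, 1] } };

            static void Main()
            {
                var t1 = new double[2, 2] { { 1, 2 }, { 3, 4 } };
                var t2 = new double[2, 2] { { 0, 1 }, { 1, 0 } };

                var r  = Mul(t2, t1);       // t2 * t1
                var r2 = Mul(T(t1), T(t2)); // t1^T * t2^T
                var rt = T(r);              // (t2 * t1)^T

                bool equal = true;
                for (int i = 0; i < 2; i++)
                    for (int j = 0; j < 2; j++)
                        equal &= r2[i, j] == rt[i, j];

                Console.WriteLine(equal); // True: r2 is the transpose of r, not r
            }
        }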

    Read the article

  • Facebook contest policy no-no?

    - by Fred
    I would like to post a link on a Facebook page where it will exit Facebook entirely and go to a client's website, where people land on a page (the client's) on which they can enter their e-mail address to be entered in a temporary database file, with rules and disclosures etc., for a draw once the number of entries reaches 100, for instance. Once the number of entries reaches 100, a random winner is picked and notified via e-mail. The functionality is as follows:

    - A link is placed on a Facebook page leading to an external page.
    - The page is a form to merely enter an e-mail address for a contest.
    - The e-mail is placed in a temporary file.
    - An automatic e-mail is sent to the address used, for confirmation, using a SHA-256 hash.
    - The person receives the e-mail saying something to the effect of "Please confirm your e-mail address etc. - If you did not authorize this, simply ignore this message and no further action will be taken."
    - If the person clicks on the confirmation link, the e-mail is then stored in the database and the person is again notified, saying "Thank you for signing up etc."
    - Once others complete the same process and the database reaches a certain number, the form is no longer accessible and a random e-mail is automatically picked.
    - Once picked, an e-mail is automatically sent to the winner stating the instructions, also notifying me.
    - Once that person clicks yet another confirmation link, the database is then automatically deleted.

    I have built this myself and have no intention of breaking any rules, nor of jeopardizing the work/time/energy I have put into this project. Is this allowed?

    Read the article

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this.

    First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight and, in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects; nothing big, really. A cute, little manager.

    In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architect the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed, and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite).

    The reason for that is simple. After all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class, or to split this giant manager into sub-managers.

    Of course this is a classic anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
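    One common shape for the alternative (a sketch, not a prescription - the subsystem names are invented): keep the top-level object as a thin composition root that only wires up and ticks narrowly-scoped subsystems, so a new feature or bugfix has an obvious home other than a grab-bag singleton.

        using System.Collections.Generic;

        interface IGameSystem
        {
            void Update(float dt);
        }

        sealed class PhysicsSystem : IGameSystem { public void Update(float dt) { /* ... */ } }
        sealed class AudioSystem   : IGameSystem { public void Update(float dt) { /* ... */ } }
        sealed class StateMachine  : IGameSystem { public void Update(float dt) { /* ... */ } }

        // The "manager" only composes and forwards; feature code lives in the systems.
        sealed class Game
        {
            readonly List<IGameSystem> systems = new List<IGameSystem>
            {
                new PhysicsSystem(),
                new AudioSystem(),
                new StateMachine(),
            };

            public void Tick(float dt)
            {
                foreach (var s in systems)
                    s.Update(dt);
            }
        }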

    Read the article

  • Expected time for lazy evaluation with nested functions?

    - by Matt_JD
    A colleague and I are doing a free R course (although I believe this is a more general lazy evaluation issue) and have found a scenario that we have discussed briefly; I'd like to find out the answer from the wider community. The scenario is as follows (pseudocode):

        wrapper => function(thing) {
            print => function() {
                write(thing)
            }
        }

        v = createThing(1, 2, 3)
        w = wrapper(v)
        v = createThing(4, 5, 6)
        w.print() // Will print 4, 5, 6 thing.
        v = create(7, 8, 9)
        w.print() // Will print 4, 5, 6 because "thing" has now been evaluated.

    Another similar situation is as follows:

        // Using the same function as above
        v = createThing(1, 2, 3)
        v = wrapper(v)
        w.print() // The wrapper function incestuously includes itself.

    Now, I understand why this happens, but where my colleague and I differ is on what should happen. My colleague's view is that this is a bug, and that evaluation of the passed-in argument should be forced at the point it is passed in, so that the returned "w" function is fixed. My view is that I would prefer his option myself, but I realise that the situation we are encountering is down to lazy evaluation and is just how it works: more a quirk than a bug. I am not actually sure what would be expected, hence the reason I am asking this question. I think that function comments could express what will happen, or leave it to be very lazy, and if the coder using the function wants the argument evaluated then they can force it before passing it in. So, when working with lazy evaluation, what is the practice for the time to evaluate an argument passed, and stored, inside a function?
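    As a point of comparison from an eagerly-evaluated language: in C#, closures capture variables rather than values, which produces a similar surprise, and the usual practice is exactly what the poster suggests - forcing evaluation (here, copying into a snapshot) at the point of capture. A small sketch:

        using System;

        class CaptureDemo
        {
            static void Main()
            {
                int[] v = { 1, 2, 3 };

                // Captures the *variable* v: later reassignments are visible.
                Action printLazy = () => Console.WriteLine(string.Join(", ", v));

                // Copying first "forces" the value at capture time.
                int[] snapshot = (int[])v.Clone();
                Action printForced = () => Console.WriteLine(string.Join(", ", snapshot));

                v = new[] { 4, 5, 6 };

                printLazy();   // 4, 5, 6 - the capture followed the variable
                printForced(); // 1, 2, 3 - the value was fixed when captured
            }
        }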

    Read the article

  • How do I make apt-get install commands not display every package that it is installing?

    - by rajlego
    When I install something in the terminal, it often shows me a few things for status. For one, it shows the download rate (which is fine). However, when I install something, it can display:

        Unpacking libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
        Selecting previously unselected package slingshot-launcher.
        Preparing to unpack .../slingshot-launcher_0.7.6.1+r421+pkg32~ubuntu0.3.1_amd64.deb ...
        Unpacking slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
        Selecting previously unselected package contractor.
        Preparing to unpack .../contractor_0.3.1~r136+pkg22~ubuntu0.3.1_amd64.deb ...
        Unpacking contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
        Selecting previously unselected package apport-hooks-elementary.
        Preparing to unpack .../apport-hooks-elementary_0.1-0~35~saucy1_all.deb ...
        Unpacking apport-hooks-elementary (0.1-0~35~saucy1) ...
        Processing triggers for hicolor-icon-theme (0.13-1) ...
        Processing triggers for libglib2.0-0:amd64 (2.40.0-2) ...
        Processing triggers for man-db (2.6.7.1-1) ...
        Setting up libgranite-common (0.3.0~r732+pkg64~ubuntu0.3.1) ...
        Setting up libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
        Setting up slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
        Setting up contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
        Setting up apport-hooks-elementary (0.1-0~35~saucy1) ...
        Processing triggers for libc-bin (2.19-0ubuntu6) ..

    I would rather that not show up. I only want to see the download rate, not all that other stuff. How do I do this?

    EDIT: I would also like the jargon to be stored somewhere else in case something goes wrong, or for the jargon to just be expandable in the terminal.

    Read the article

  • An algorithm for finding a subset matching criteria?

    - by Macin
    I recently came up with a problem which I would like to share some thoughts about with someone on this forum. It relates to finding a subset. In reality it is more complicated, but I have tried to present it here using some simpler concepts. To make things easier, I created this conceptual DB model: let's assume this is a DB for storing recipes. A recipe can have many instruction steps and many ingredients. Ingredients are stored in a cupboard, and we know how much of each ingredient we have. Now, when we create a recipe, we have to define how much of each ingredient we need. When we want to use a recipe, we would just check whether the required amount is less than the available amount for each product and then decide if we can cook a dinner - if the amount required for at least one ingredient is greater than the available amount, the recipe cannot be cooked. A simple SQL query gets the result. This is straightforward, but I'm wondering how I should work when the problem is stated the other way round, i.e. how to find the recipes which can be cooked only from the ingredients that are available? I hope my explanation is clear, but if you need any more clarification, please ask.
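    In set terms, the reversed problem is "recipes with no ingredient requirement exceeding the cupboard". A minimal C#/LINQ sketch (class and property names are made up for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical shapes standing in for the recipe/ingredient tables.
        class Requirement { public string Ingredient; public double Amount; }
        class Recipe { public string Name; public List<Requirement> Needs; }

        class CookableRecipes
        {
            static void Main()
            {
                var cupboard = new Dictionary<string, double>
                {
                    ["flour"] = 500, ["eggs"] = 2, ["milk"] = 200,
                };

                var recipes = new List<Recipe>
                {
                    new Recipe { Name = "Pancakes",
                        Needs = new List<Requirement> {
                            new Requirement { Ingredient = "flour", Amount = 300 },
                            new Requirement { Ingredient = "eggs",  Amount = 2 } } },
                    new Recipe { Name = "Omelette",
                        Needs = new List<Requirement> {
                            new Requirement { Ingredient = "eggs", Amount = 4 } } },
                };

                // A recipe is cookable when every requirement is covered.
                var cookable = recipes.Where(r => r.Needs.All(n =>
                    cupboard.TryGetValue(n.Ingredient, out var have) && have >= n.Amount));

                foreach (var r in cookable)
                    Console.WriteLine(r.Name); // Pancakes
            }
        }

    The SQL equivalent is an anti-join: select the recipes for which there does NOT EXIST an ingredient row whose required amount exceeds the available amount.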

    Read the article

  • How to store and retrieve/generate UI?

    - by thindery
    I'm working on a site that will have hundreds, and eventually thousands, of paper products that users can customize online. Here is a very simple sample of what needs to be generated based on the product ID: demo. This is a very simple version; I plan on replacing the text fields with prettier elements (like the slider on tab 3). I imagine most of this can be achieved via jQuery. So basically a product will have multiple pages (tabs), with multiple form elements on each page. I've never done a large-scale project like this before, and I am looking for ideas/suggestions for how I can store the info for each product that needs to be generated to create the UI. For each product, I need to store how many pages there are, what form fields are on each page, and the order of the fields on the page, as well as setting default text values and form options (font size, etc.). Then, with all this info stored somewhere, I can have the web app retrieve it and generate the UI with text fields, sliders, and other jQuery-ish form enhancements for that particular product. Can anyone toss out some suggestions, links, blogs, tutorials? I'm not really sure where to begin with this or what I need to start to investigate. I have experience with PHP, MySQL, JavaScript, jQuery, HTML, CSS, and that is really about it. I'm open to learning (and would enjoy exploring) new frameworks, programming, etc. that will really get this web app working correctly, efficiently, and effectively. Maybe I should start looking into an MVC framework? Like I said, I really have no idea what the best approach is. Please let me know your suggestions!
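    One way to frame the storage question is to treat each product's UI as data: a small schema of pages and fields that one generic renderer turns into tabs, inputs and sliders. A hedged sketch of the shape such a definition might take (all names invented; the poster's stack is PHP/MySQL, so read this C# as an illustration of the data shape, not a stack recommendation):

        using System.Collections.Generic;

        // Hypothetical data model: the UI for a product is stored, not coded.
        class FieldDef
        {
            public string Name;        // e.g. "headline"
            public string Type;        // "text", "slider", "select", ...
            public int    Order;       // position on the page
            public string Default;     // default value
            public Dictionary<string, string> Options; // font size, min/max, ...
        }

        class PageDef
        {
            public string Title;
            public List<FieldDef> Fields;
        }

        class ProductUiDef
        {
            public int ProductId;
            public List<PageDef> Pages; // tabs, in order
        }

    Persisted either as rows (product -> page -> field) or as a JSON document per product, this makes one generic renderer serve every product, so adding a product becomes a data-entry task rather than a coding one.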

    Read the article

  • Hosting woes

    Unfortunately quite a few people have noticed our recent hosting problems, but if you are reading this they should all be over, so please accept our apologies. Our former web host decided to migrate to a new platform. It had all sorts of great features, but on reflection hosting wasn't one of them. We knew it was coming, and had even been proactive and requested several dates on their migration control panel so I could be around to check it afterwards. The dates came and went without anything happening, so we sat back and carried on for a couple of months, thinking they'd get back to us when they were ready.

    Then out of the blue I get an email saying it has happened! Now this is what I call timing: I had client work to complete, a 50 minute presentation to write, and there was a little conference called SQLBits that I help organise at the end of the week - and then our hosting provider decides to migrate our sites. Unfortunately they only migrated parts of the sites; they forgot things like the database for SQLDTS. The database eventually appeared, but the data didn't. Then the data pitched up, but without the stored procedures. I was even asked if I could perform a backup and send it to them, as they were getting timeout errors. Never mind the issues of performing a native backup on a hosted server; whilst I could have done something, the question actually left me speechless. So you cannot access your own SQL server and you expect me to be able to help?

    This site was there, but hadn't been set as an IIS application, so all path references were wrong, which meant no CSS, and all the internal navigation and links were wrong. The new improved hosting platform Control Panel didn't appear to like setting applications. It said it would - you'd have to wait 2 hours, of course - then just decided not to bother after all. So needless to say, after a very successful SQLBits I focused my attention on finding a new web host, and here we are again. Sorry it took so long.

    Read the article

  • Google Rolls Out Secured Search. It’s Slightly Different From Regular Search

    - by Gopinath
    Google rolled out a secured version of its search engine at https://google.com (did you notice https instead of http?). This search engine lets everyone use Google search in a secured way.

    How is it secured? When you use https://google.com, the data exchanged between your browser and Google's servers is encrypted, to make sure that no one can sniff it.

    Is my search history secured from Google? No. The search queries you submit to Google are stored on Google's servers. There is no change to Google's recording of search history.

    Are there any differences between regular search and secured search results? Yes, secured search is slightly different from regular search. When you are accessing Google Secured Search: image search options will not be available on the left sidebar, and the site may respond slowly compared to the regular search site, as there is overhead in establishing a secure connection between your browser and the server.

    Read the article

  • File Adapter FileName Macros

    - by IntegrationOverload
    I can never find these when I need them...

    %datetime% - Coordinated Universal Time (UTC) date time in the format YYYY-MM-DDThhmmss (for example, 1997-07-12T103508).

    %datetime_bts2000% - UTC date time in the format YYYYMMDDhhmmsss, where sss means seconds and milliseconds (for example, 199707121035234 means 1997/07/12, 10:35:23 and 400 milliseconds).

    %datetime.tz% - Local date time plus time zone from GMT in the format YYYY-MM-DDThhmmssTZD (for example, 1997-07-12T103508+800).

    %DestinationParty% - Name of the destination party. The value comes from the message context property BTS.DestinationParty.

    %DestinationPartyQualifier% - Qualifier of the destination party. The value comes from the message context property BTS.DestinationPartyQualifier.

    %MessageID% - Globally unique identifier (GUID) of the message in BizTalk Server. The value comes directly from the message context property BTS.MessageID.

    %SourceFileName% - Name of the file from which the File adapter read the message. The file name includes the extension and excludes the file path, for example, Sample.xml. When substituting this property, the File adapter extracts the file name from the absolute file path stored in the FILE.ReceivedFileName context property. If the context property does not have a value, for example, if a message was received on an adapter other than the File adapter, the macro will not be substituted and will remain in the file name as is (for example, C:\Drop\%SourceFileName%).

    %SourceParty% - Name of the source party from which the File adapter received the message.

    %SourcePartyQualifier% - Qualifier of the source party from which the File adapter received the message.

    %time% - UTC time in the format hhmmss.

    %time.tz% - Local time plus time zone from GMT in the format hhmmssTZD (for example, 124525+530).

    Read the article

  • [LINQ] Master–Detail Same Record (II)

    - by JTorrecilla
    In my previous post I introduced my problem, but I didn't explain the problem with Entity Framework. When you try the solution indicated, you will get the following error:

        LINQ to Entities does not recognize the method 'System.String Join(System.String, System.Collections.Generic.IEnumerable`1[System.String])' method, and this method cannot be translated into a store expression.

    The query that produces that error was:

        var consulta = (from TCabecera cab in contexto_local.TCabecera
                        let Detalle = (from TDetalle detalle in cab.TDetalle
                                       select detalle.Nombre)
                        let Nombres = string.Join(",", Detalle)
                        select new
                        {
                            cab.Campo1,
                            cab.Campo2,
                            Nombres
                        }).ToList();
        grid.DataSource = consulta;

    Why is this error happening? It happens when the query cannot be translated into T-SQL. Solutions? To get rid of the error, we need to execute the query in two steps:

        var consulta = (from TCabecera cab in contexto_local.TCabecera
                        let Detalle = (from TDetalle detalle in cab.TDetalle
                                       select detalle.Nombre)
                        select new
                        {
                            cab.Campo1,
                            cab.Campo2,
                            Detalle
                        }).ToList();
        var consulta2 = (from dato in consulta
                         let Nombres = string.Join(",", dato.Detalle)
                         select new
                         {
                             dato.Campo1,
                             dato.Campo2,
                             Nombres
                         });
        grid.DataSource = consulta2.ToList();

    Curiously, this problem happens with Entity Framework, but the same problem cannot be reproduced with LINQ to SQL, where it works fine in one single step. Hope it's helpful. Best regards

    Read the article

  • Dual Monitor in Ubuntu 11.10 is resetting the theme

    - by Mengu
    I'm experiencing a strange problem with dual monitors. When I set up dual monitors via "nvidia-settings" and save the settings to xorg.conf, the default Unity theme and icons revert to the GTK default. I also get an error telling me "could not apply the stored configuration for monitors". Here is my xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 280.13 (buildd@yellow) Fri Aug 5 12:31:28 UTC 2011

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Chi Mei Optoelectronics corp."
            HorizSync       30.0 - 75.0
            VertRefresh     60.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GT 540M"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "1"
            Option         "TwinViewXineramaInfoOrder" "DFP-0"
            Option         "metamodes" "DFP: nvidia-auto-select +0+0, CRT: 1680x1050 +1920+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Here is an example of what I'm talking about: http://i.stack.imgur.com/vrlW1.png

    How can I fix this? Thanks in advance.

    Read the article

  • Best practice - handling images on a website

    - by Steve
    I am porting an old eCommerce site to MVC 3 and would like to take advantage of design improvements. The site currently has product images stored in 3 sizes: thumbnail, medium (for display in a list) and expanded for a zoomed look. Right now we have to upload 3 separate images that are sized exactly right, provide 3 different names that match what the site expects, etc.; it is a pain. I'd like to upload just 1 file, the large one, and then let the site reduce it to the needed sizes, and I'd like the flexibility to change the thumbnail and list sizes depending on user preferences, form factor (e.g. mobile, iPad, desktop), etc., so I might need many copies of the same image. My question is: should the image be reduced and then saved several times upon upload, and if so, what is a good storage/naming convention? The other idea is to store just the single image but resize it programmatically before serving it to the client. Has anybody done this, and what are the tradeoffs besides a few more machine cycles? How do you pass a temporary image in memory to the client (there is no URL)?
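    For the resize-on-upload option, a minimal sketch (System.Drawing; the sizes and the {productId}_{width}.jpg naming convention are invented for illustration):

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;
        using System.IO;

        static class ImageRenditions
        {
            // Generates the derived sizes once, at upload time.
            public static void SaveRenditions(string masterPath, string outputDir, int productId)
            {
                int[] widths = { 100, 300, 800 }; // thumbnail, list, zoom - adjust to taste
                using (var master = Image.FromFile(masterPath))
                {
                    foreach (int w in widths)
                    {
                        int h = master.Height * w / master.Width; // preserve aspect ratio
                        using (var bmp = new Bitmap(w, h))
                        using (var g = Graphics.FromImage(bmp))
                        {
                            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                            g.DrawImage(master, 0, 0, w, h);
                            bmp.Save(Path.Combine(outputDir, productId + "_" + w + ".jpg"),
                                     ImageFormat.Jpeg);
                        }
                    }
                }
            }
        }

    As for the last question: in MVC the resize-on-demand variant does have a URL - the route of a controller action that returns a FileContentResult - so the bytes never need a path on disk, and output caching keeps each size from being recomputed on every request.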

    Read the article

  • Tomcat 7 vs. ehCache Standalone Server (Glassfish) Configuration with RESTful Web Services

    - by socal_javaguy
    My requirements consist of using ehCache to send and store data via RESTful web service calls. The data can be stored in-memory or via the filesystem... I have never used ehCache before, so I am having some issues deciding which bundle to use. I have downloaded the following bundles:

    - ehcache-2.6.2
    - ehcache-standalone-server-1.0.0

    (1) What is the difference between the two? It seems that ehcache-2.6.2 contains the source and binaries, which essentially enables one to bundle it with a webapp (by putting the compiled jar or binaries inside the webapp's WEB-INF/lib folder), but it doesn't seem to have support for RESTful web services. Whereas ehcache-standalone-server-1.0.0 (which comes with an embedded Glassfish server and has support for REST and SOAP) can be used to run as a standalone server. If my answers to my own question are correct, does that mean I should just use the standalone server?

    (2) My requirements are to set up ehCache (with REST support) on Tomcat 7. So, how could I set up ehCache on Tomcat 7 as a separate app with REST and SOAP support? Thank you for taking the time to read this...

    Read the article

  • How to synchronize a whole Ubuntu?

    - by Avio
    I think the time is ripe to have my whole Ubuntu installation synchronized just as my Dropbox folder is. Given that we are always talking about files and directories, what's the difference between my Documents folder and my /usr system directory? Almost none, except for their location. In fact, I think there is just one big issue that prevents people from having their beloved installations mirrored wherever they go: symlinks. Dropbox, Google Drive, Ubuntu One, SugarSync, SkyDrive - none of these services support symlinking. This means that if I push a symlink into one of the synced folders, locally the symlink is kept as is, but remotely (in the cloud or on the other synced machines) the symlink is resolved to the actual file that was originally pointed to. This completely disrupts Linux installations, so these services can't be used for this purpose.

    So the question is: does anybody know a way to achieve this? A whole Ubuntu, always synchronized with a remote running copy, but still locally stored on both disks? My best guess is that I could use NFS. But the main difference between Dropbox and NFS is that NFS is a remote filesystem that always forces remote access to the files, while Dropbox pushes modifications to local filesystems (and thus would perform better). I've also heard about NFS caching. Does anybody know if this solution could approximate Dropbox in this sense?

    P.S. I know that /boot, /dev, /proc, /run, /tmp and device-specific mountpoints in /mnt and /media will have to be left out of the sync mechanism. What I'm interested in is the principle. Can this be done with reasonable performance, given reasonable resources (e.g. ~1 Mbps upload bandwidth and a public IP address)?

    Read the article

  • Multiple passes in direct3d10

    - by innochenti
    I am beginning to learn Direct3D 10 and am stuck on multiple passes. As input I have a triangle (stored in a vertex/index buffer) and this effect file:

        // some vertex shader and globals go here; skipped to preserve simplicity
        float4 ColorPixelShader(PixelInputType input) : SV_Target
        {
            return float4(1, 0, 0, 0);
        }

        float4 ColorPixelShader1(PixelInputType input) : SV_Target
        {
            return float4(0, 1, 0, 0);
        }

        technique10 ColorTechnique
        {
            pass pass0
            {
                SetVertexShader(CompileShader(vs_4_0, ColorVertexShader()));
                SetPixelShader(CompileShader(ps_4_0, ColorPixelShader()));
                SetGeometryShader(NULL);
            }
            pass pass1
            {
                SetVertexShader(CompileShader(vs_4_0, ColorVertexShader()));
                SetPixelShader(CompileShader(ps_4_0, ColorPixelShader1()));
                SetGeometryShader(NULL);
            }
        }

    And some render code:

        pass1->Apply(0);
        device->DrawIndexed(indexCount, 0, 0);
        pass2->Apply(0);
        device->DrawIndexed(indexCount, 0, 0);

    What I'd expect to see is a green triangle, but it always shows me a red triangle. What am I doing wrong? Also, I've got another question: should I set a vertex shader in every pass? I've added a ColorVertexShader1 that translates the vertex position by some delta, and got the following picture: http://imgur.com/Oe7Qj

    Read the article

  • Scaling sprite velocity / co-ordinates in Android

    - by user22241
    I'm trying to find the answer to a question that I've had for a long time, but am having trouble finding it! I hope someone can help :-)

    I'm trying to find information on how to scale sprite velocity / movement / co-ordinates. What I mean by this is: how do I get a sprite to move at the same speed relative to the screen size / DPI, so that it takes the same amount of real time to get from one side of the screen to the other? All of the posts pertaining to sprite scaling that I can find on the various forums relate to the size of the sprite; that part I'm OK with so far. It's just that when I move a sprite, it gets there at a different speed depending on the DPI / resolution of the device. I hope I'm making sense.

    This is the code I have so far; instead of using explicit amounts, like 1, I'm using something like the following (all variables previously declared):

        platSpeedFloat = (1 * (dpi / 160)); // Use '1' so on an MDPI screen the sprite will move by 1 physical pixel

    Then basically what I'm doing is something like this:

        platSpeedSave += platSpeedFloat;  // Add the platSpeedFloat value to the current platSpeedSave value
        platSpeed = (int) platSpeedSave;  // Cast to int so it can be checked in the following statement
        if (platSpeed == platSpeedSave)   // Check the casted int value against the float value stored previously
        {
            floorY = floorY - platSpeed;  // If they match then change the Y value
            platSpeedSave = 0;            // Reset
        }

    I would be grateful if someone could assist - hope I'm making sense. The above doesn't seem to work; the sprite moves 'faster' on lower-DPI screens. Thanks
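    A common device-independent alternative is to express velocity as a fraction of the screen per second and integrate it with the frame time, so neither DPI nor resolution appears in the per-frame step. A minimal C# sketch (the idea maps directly onto Android Java; names are invented):

        // Velocity as "screens per second"; the position accumulates as a float
        // and is only rounded at draw time, so no movement is lost to int casts.
        class Sprite
        {
            const float ScreensPerSecond = 0.25f; // crosses the screen in 4 seconds

            float y; // position in pixels, kept as a float between frames

            // Call once per frame with elapsed seconds and the screen height.
            public void Update(float dtSeconds, int screenHeightPx)
            {
                y -= ScreensPerSecond * screenHeightPx * dtSeconds;
            }

            public int DrawY
            {
                get { return (int)System.Math.Round(y); }
            }
        }

    This also sidesteps the if (platSpeed == platSpeedSave) comparison in the snippet above, which only fires when the accumulated float happens to land on a whole number.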

    Read the article

  • Using mod_rewrite for a Virtual Filesystem vs. Real Filesystem

    - by philtune
    I started working in a department that uses a CMS in which the entire "filesystem" works like this:

    - Create a named file or folder - this file is given a unique node (ex. 2345) as well as a default "filename" (ex. /WelcomeToOurProductsPage), and a template is applied.
    - Assign one or more aliases to the file for a URL redirect (ex. /home-page-products - it can also be accessed by /home-page-products.aspx).
    - A new Rewrite command is written to the .htaccess file for each and every alias.
    - The server accesses either /WelcomeToOurProductsPage or /home-page-products and redirects to something like /template.aspx?tmp=2&node=2345 (here I'm guessing what it does - I only have front-end access for now - but I have enough clues to strongly assume).
    - Node 2345 grabs content stored in a SQL Db and applies it to the template.

    Note: there are no actual files being created on the filesystem; it's entirely virtual. This is probably a very common thing, but since I had never run across this kind of system before two months ago, I wanted to explain it in case it isn't common. I'm not a fan at all of ASP or closed-source systems, so it may be that this is common practice for ASP developers.

    My question, which has taken far too long to ask, is: what are the benefits of this kind of system, as opposed to creating an actual file hierarchy? Are there any drawbacks to having every single file server call redirected? To having the .htaccess file hold rewrite rules for every single alias?

    Read the article

  • How to move a selection of files into a new folder via the right-click menu?

    - by LinuxDudester
    I recently switched from OS X to Xubuntu 14.04 and I'm loving my new-found freedom. For the most part I've managed to customize my Linux operating system to my needs and likes, but there's one feature I'm missing the most. I need to toss a bunch of items into a folder really fast, since I am working with lots of images and text files. In OS X there was a nifty shortcut that managed the operation in one fell swoop, so you didn't have to make a folder and then take further action to populate it. All I needed to do was select the items I wanted in the Finder (file manager), right-click on them to bring up OS X's contextual menu, and choose the first option: New Folder with Selection. The Finder would then create a new folder with those items stored safely inside, removing at least one step from the process automatically. Super easy! Now I was wondering: how can I do this in Linux? Or, most importantly, in Xubuntu? Any help would be greatly appreciated!

    Read the article

  • Localization of Database Strings in .Net

    - by Aligned
    I have several database tables that have a description column that I need to display in the UI. .NET has .resx files that will help with the translation of the strings when Thread.CurrentCulture.UICulture is set, but I needed a custom approach for the strings that are stored in the database and not in the .resx files. Here's my approach:

    1. Create a resource file for each database table and put them in the /Resources/Database/ directory.
    2. Create a method in LocalizationHelpers (GetLocalizedString) that will get the string from the table for English (which should be cached to avoid unneeded service/database calls) or from the .resx when not English.
    3. All database tables need to have a ResxKey field that matches the key in the .resx file.
    4. By convention, the .resx file will have the same name as the database table, and the key the same as the database ResxKey. If there are multiple columns that need translation, then one ResxKey will be used with Name or Description appended.

    Here's the method I'm using to pull the string:

        public static string GetLocalizedString(string resourceName, string resourceKey)
        {
            if (executingAssembly == null)
            {
                executingAssembly = Assembly.GetExecutingAssembly();
            }

            ResourceManager manager = new ResourceManager(resourceName, executingAssembly);
            return manager.GetString(resourceKey);
        }
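    A hedged sketch of a call site following that convention (the ProductType table, the "Widget" key and the MyApp root namespace are all hypothetical; the English/database branch is shown here because the posted method itself only reads from the .resx):

        using System.Threading;

        // Hypothetical: a row from the ProductType table with ResxKey = "Widget".
        // English text comes from the (cached) database value; any other UI
        // culture falls back to Resources/Database/ProductType.resx by convention.
        bool english = Thread.CurrentThread.CurrentUICulture
                             .TwoLetterISOLanguageName == "en";
        string description = english
            ? productTypeRow.Description // cached DB value (hypothetical)
            : LocalizationHelpers.GetLocalizedString(
                  "MyApp.Resources.Database.ProductType", // assumed root namespace
                  "Widget");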

    Read the article

  • [LINQ] Master–Detail Same Record (I)

    - by JTorrecilla
    PROBLEM

    Firstly, I am working on a project based on LINQ, EF, and C# with VS2010. The following tables show what I have and what I want to show:

        Header
        C1  C2  C3
        1   P1  01/01/2011
        2   P2  01/02/2011

        Details
        1   1   D1
        2   1   D2
        3   1   D3
        4   2   D1
        5   2   D4

        Expected Results
        1   P1  01/01/2011  D1, D2, D3
        2   P2  01/02/2011  D1, D4

    IDEAS

    At the beginning I saw 3 possible ways:

    - Doing it inside the DB: it could be achieved with a CURSOR in a stored procedure.
    - Doing it from .NET with loops.
    - Doing it with LINQ (I love it!!)

    FIRST APPROX

    An example with a simple class containing a list - an Employee class that acts as the Header table:

        public class Employee
        {
            public Employee() { }
            public Int32 ID { get; set; }
            public String FirstName { get; set; }
            public String LastName { get; set; }
            public List<string> Numbers { get; set; } // Acts as Details table
        }

    We can show all numbers contained by an Employee:

        List<Employee> listado = new List<Employee>();
        // Fill listado
        var query = from Employee em in listado
                    let Nums = string.Join(";", em.Numbers)
                    select new
                    {
                        em.FirstName,
                        Nums
                    };

    The LET operator allows us to hold the result of the string.Join over the Numbers list of the Employee class. A little example. ASAP I will post the second part, achieving the same with Entity Framework. Best regards

    Read the article

  • ASP.NET MVC 3: multiple versions of the site without changing the URL - is it possible?

    - by Seacat
    Our website is written in ASP.NET MVC 3, and we want to change a feature in the core functionality of the site. The problem is that not every client can be moved to this new version/format (because of some internal technical restrictions), so we need to keep 2 versions of the same functionality (backend and frontend) simultaneously. We don't want our clients to worry about URLs, so the ideal solution would be to keep the same URL but redirect clients to the different versions. The information about clients is stored in the database, so the moment a user (client) logs in, we know which version of the site we should show. I'm thinking about routing and areas, but I'm not sure if it's possible, for example, to have 2 areas with different versions of the same application. Or is it possible to load the assemblies on the fly? After the user is logged in, we can decide whether (s)he should be redirected to the new or the old version. As soon as all the clients have been moved to the new version, we won't need this system any more. How can I do this in ASP.NET MVC?
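    Areas can indeed carry two parallel copies of the functionality, with the fork made once at login. A minimal sketch (a hedged illustration, not a full design - the V1/V2 area names and the per-client flag lookup are invented):

        using System.Web.Mvc;

        public class LoginModel
        {
            public string UserName { get; set; }
            public string Password { get; set; }
        }

        public class AccountController : Controller
        {
            [HttpPost]
            public ActionResult Login(LoginModel model)
            {
                // ... authenticate, then read the per-client flag from the database ...
                bool useNewVersion = LookUpClientFlag(model.UserName);

                // Same URL for every client; the area picked here decides which
                // copy of the functionality (old or new) serves them afterwards.
                return RedirectToAction("Index", "Home",
                    new { area = useNewVersion ? "V2" : "V1" });
            }

            private bool LookUpClientFlag(string userName)
            {
                return true; // stand-in for the real database query
            }
        }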

    Read the article

  • How to keep track of user images when using a CDN? [closed]

    - by Programmer
    We are considering moving our user profile images from the local server to the Rackspace CDN (Cloud Files). However, how do you keep track of where each user's profile image is located on the CDN? Wouldn't you have to store the CDN URL for each user image in the local database and query it every time you display a user image? Isn't that slower than accessing a user image directly on the local server, which requires no such DB query, since you already know where it is stored based on the user's user ID?

    What if a user has an album of pics? How would you keep track of all those images that belong to just that one user? What about the order of those pics?

    In the case of the Rackspace CDN, we're looking at using a Container for each individual user to help keep things more logically organized, but we don't know the best way to track all of it, since the CDN provides a seemingly random URL for each image. To make matters worse, you can't even delete a non-empty Container belonging to a user when they delete their account; you actually have to delete each object inside the Container one by one before deleting the Container itself. It doesn't end there: you can't have nested Containers or "sub-folders", and you can't rename a file (you must copy it with a new name and delete the old one manually). It just sounds so much more complicated than we thought it would be, and it certainly does not feel "intuitive" compared to local storage, so we don't know what to do. Please help.
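    One conventional answer to the bookkeeping half of the question is a single image table keyed by user, holding just enough to rebuild each CDN URL plus a sort column; a hedged sketch (all names invented):

        // Hypothetical mapping table: one row per image, however large the album.
        public class UserImage
        {
            public int    Id         { get; set; }
            public int    UserId     { get; set; } // owner
            public string Container  { get; set; } // e.g. one container per user
            public string ObjectName { get; set; } // object name in Cloud Files
            public int    SortOrder  { get; set; } // position within the album
            public bool   IsProfile  { get; set; } // marks the profile picture
        }

    If, as with Cloud Files containers, the random-looking part of the URL is per container rather than per object, then storing (or caching) each container's CDN base URI once and the object name per row is enough to reconstruct every URL; with a user's rows cached alongside their profile, the extra lookup is negligible.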

    Read the article

  • How to decrypt an encrypted Apple iTunes iPhone backup?

    - by afit
    I've been asked by a number of unfortunate iPhone users to help them restore data from their iTunes backups. This is easy when they are unencrypted, but not when they are encrypted, whether or not the password is known. As such, I'm trying to figure out the encryption scheme used on mddata and mdinfo files when encrypted. I have no problems reading these files otherwise, and have built some robust C# libraries for doing so. (If you're able to help, I don't care which language you use. It's the principle I'm after here!)

    The Apple "iPhone OS Enterprise Deployment Guide" states that "Device backups can be stored in encrypted format by selecting the Encrypt iPhone Backup option in the device summary pane of iTunes. Files are encrypted using AES128 with a 256-bit key. The key is stored securely in the iPhone keychain." That's a pretty good clue, and there's some good info here on Stack Overflow on iPhone AES/Rijndael interoperability suggesting a key size of 128 and CBC mode may be used. Aside from any other obfuscation, a key and an initialisation vector (IV)/salt are required. One might assume that the key is a manipulation of the "backup password" that users are prompted to enter by iTunes and that is passed to "AppleMobileBackup.exe", padded in a fashion dictated by CBC. However, given the reference to the iPhone keychain, I wonder whether the "backup password" might instead be used as a password on an X509 certificate or symmetric private key, and whether the certificate or private key itself might be used as the key. (AES and the iTunes encrypt/decrypt process are symmetric.)

    The IV is another matter, and it could be a few things. Perhaps it's one of the keys hard-coded into iTunes, or into the devices themselves. Although Apple's comment above suggests the key is present on the device's keychain, I think this isn't that important. One can restore an encrypted backup to a different device, which suggests that all information relevant to the decryption is present in the backup and iTunes configuration, and that anything solely on the device is irrelevant and replaceable in this context. So where might the key be?

    I've listed paths below from a Windows machine, but it's much of a muchness whichever OS we use. "\appdata\Roaming\Apple Computer\iTunes\itunesprefs.xml" contains a PList with a "Keychain" dict entry in it. "\programdata\apple\Lockdown\09037027da8f4bdefdea97d706703ca034c88bab.plist" contains a PList with "DeviceCertificate", "HostCertificate", and "RootCertificate", all of which appear to be valid X509 certs. The same file also appears to contain the asymmetric keys "RootPrivateKey" and "HostPrivateKey" (my reading suggests these might be PKCS #7-enveloped). Also, within each backup there are "AuthSignature" and "AuthData" values in the Manifest.plist file, although these appear to be rotated as each file gets incrementally backed up, suggesting they're not that useful as a key, unless something really quite involved is being done.

    There's a lot of misleading stuff out there suggesting that getting data from encrypted backups is easy. It's not, and to my knowledge it hasn't been done. Bypassing or disabling the backup encryption is another matter entirely, and is not what I'm looking to do. This isn't about hacking apart the iPhone or anything like that. All I'm after here is a means to extract data (photos, contacts, etc.) from encrypted iTunes backups as I can from unencrypted ones. I've tried all sorts of permutations with the information I've put down above but have got nowhere. I'd appreciate any thoughts or techniques I might have missed.
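    For context on the mechanics under discussion: once a key and IV are in hand, the AES-CBC step itself is routine in .NET (fitting, given the poster's C# libraries); all of the difficulty above lives in deriving them. A sketch with placeholder key/IV, which are precisely the unknowns:

        using System.IO;
        using System.Security.Cryptography;

        static class BackupCrypto
        {
            // Decrypts one encrypted mddata file, *assuming* key and iv have
            // somehow been recovered - deriving them is the unsolved part above.
            public static byte[] DecryptFile(string path, byte[] key, byte[] iv)
            {
                using (var aes = new AesManaged
                {
                    KeySize = 256,          // per Apple's note; 128 if it is plain AES-128/CBC
                    Mode = CipherMode.CBC,
                    Padding = PaddingMode.PKCS7,
                    Key = key,
                    IV = iv,
                })
                using (var decryptor = aes.CreateDecryptor())
                using (var input = File.OpenRead(path))
                using (var cs = new CryptoStream(input, decryptor, CryptoStreamMode.Read))
                using (var output = new MemoryStream())
                {
                    cs.CopyTo(output);
                    return output.ToArray();
                }
            }
        }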

    Read the article
