Search Results

Search found 21317 results on 853 pages for 'key mapping'.

Page 636 of 853

  • Using Python's ConfigParser to read a file without section name

    - by Arrieta
    Hello: I am using ConfigParser to read the runtime configuration of a script. I would like to have the flexibility of not providing a section name (there are scripts which are simple enough; they don't need a 'section'). ConfigParser will throw the NoSectionError exception, and will not accept the file. How can I make ConfigParser simply retrieve the (key, value) tuples of a config file without section names? For instance: key1=val1 key2:val2 I would rather not write to the config file.
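    A common workaround (a sketch, not from the original post; the 'top' section name is an arbitrary choice): wrap the file contents so ConfigParser sees a dummy section header before the real key/value pairs.

        import ConfigParser
        import StringIO

        def parse_sectionless(path, dummy='top'):
            # Prepend a fake [top] header so ConfigParser stops raising
            # NoSectionError, then read the real key=val / key: val pairs.
            with open(path) as f:
                buf = StringIO.StringIO('[%s]\n%s' % (dummy, f.read()))
            parser = ConfigParser.ConfigParser()
            parser.readfp(buf)
            return dict(parser.items(dummy))

        # parse_sectionless('script.cfg') -> {'key1': 'val1', 'key2': 'val2'}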

    Read the article

  • Tell The Program What To Do When No Save Data Is Found - NSUserDefaults, iPhone

    - by Stumf
    Hello all, I have data which was saved using NSUserDefaults. I was under the impression that if there was nothing saved to the key already (first time the app is run) it would default to 0. This, however, doesn't seem to be the case. Here is what I have: To save: - (void)viewWillDisappear:(BOOL)animated { [[NSUserDefaults standardUserDefaults] setInteger:setbatteryHealthCalculated forKey:@"healthValue"]; [[NSUserDefaults standardUserDefaults] synchronize]; } To load: - (void) viewWillAppear:(BOOL)animated { NSInteger setbatteryHealthCalculated = [[NSUserDefaults standardUserDefaults] integerForKey:@"healthValue"]; } To check for a saved value: - (IBAction)check{ NSInteger setbatteryHealthCalculated = [[NSUserDefaults standardUserDefaults] integerForKey:@"healthValue"]; if (setbatteryHealthCalculated = 0) { [self performSelector:@selector(displayAlert) withObject:nil]; }
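    For what it's worth, integerForKey: does return 0 when the key has never been saved, so the default is not the problem; the check above uses the assignment operator, which stores 0 and always evaluates to false. A sketch of the corrected test (displayAlert is called directly here for brevity; note too that the NSInteger locals above shadow any instance variable of the same name):

        - (IBAction)check {
            NSInteger health = [[NSUserDefaults standardUserDefaults] integerForKey:@"healthValue"];
            if (health == 0) {   // '==' compares; '=' assigns 0, so the if never fires
                [self displayAlert];
            }
        }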

    Read the article

  • NSSortDescriptor for NSFetchRequest sorting unexpectedly

    - by E-Madd
    My entity has a property (sortOrder) that is of the type Decimal (NSDecimalNumber), but when I execute a fetch request using that property as a sort key, I get back results in a seemingly random order. If I output the value of the property I get really strange values until I take its intValue. Example: The first run produces this result. The first value is the raw value of the property. The second is the intValue, the actual value of the property when I created the object - or at least I thought. 85438160 10 74691424 20 Second run... 85333744 10 85339168 20 Third... 85263696 20 85269568 10 What the hell?
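    A guess, assuming the raw values were printed with %d or a similar integer format: sortOrder is an NSDecimalNumber object, so %d prints the object's pointer address - which is exactly the kind of large, run-to-run random number shown above - while the fetch may already be sorted correctly. Printing the object itself shows the real value:

        NSDecimalNumber *order = [item valueForKey:@"sortOrder"];  // 'item' = one fetched object
        NSLog(@"%@ %d", order, [order intValue]);  // %@ prints 10 or 20; %d on 'order' prints its pointer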

    Read the article

  • Actionscript 3 Navigate with Keyboard Between Labels

    - by Sbml
    Hello, I need to navigate between labels with the arrow keys, like a PowerPoint presentation. I have an array with labels and a KeyboardEvent. My problem is that if I am on label number four, for example, and press an arrow key, it always goes to the first label. So I need help determining my current label in order to go to the next one on key press. My code: import flash.events.KeyboardEvent; var myLabels:Array = [ "label_1", "label_2", "label_3", "label_4"]; var nextLabel:String; var inc:int = 0; stage.addEventListener(KeyboardEvent.KEY_DOWN, keyPressed); function keyPressed(evt:KeyboardEvent):void { switch(evt.keyCode) { case Keyboard.RIGHT : nextLabel = String(myLabels[inc]); gotoAndStop(nextLabel); inc++; break; } } Thanks
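    One way to fix it (a sketch): derive the index from the clip's currentLabel instead of keeping the separate inc counter, which never tracks where the playhead actually is:

        import flash.events.KeyboardEvent;
        import flash.ui.Keyboard;

        stage.addEventListener(KeyboardEvent.KEY_DOWN, keyPressed);

        function keyPressed(evt:KeyboardEvent):void {
            var i:int = myLabels.indexOf(currentLabel); // where the playhead is now
            if (evt.keyCode == Keyboard.RIGHT && i < myLabels.length - 1) {
                gotoAndStop(myLabels[i + 1]);
            } else if (evt.keyCode == Keyboard.LEFT && i > 0) {
                gotoAndStop(myLabels[i - 1]);
            }
        }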

    Read the article

  • SoundPlayer causing Memory Leaks?

    - by Nick Udell
    I'm writing a basic writing app in C# and I wanted to have the program make typewriter sounds as you typed. I've hooked the KeyPress event on my RichTextBox to a function that uses a SoundPlayer to play a short wav file every time a key is pressed, however I've noticed after a while my computer slows to a crawl and checking my processes, audiodlg.exe was using 5 GIGABYTES of RAM. The code I'm using is as follows: I initialise the SoundPlayer as a global variable on program start with SoundPlayer sp = new SoundPlayer("typewriter.wav") Then on the KeyPress event I simply call sp.Play(); Does anybody know what's causing the heavy memory usage? The file is less than a second long, so it shouldn't be clogging the thing up too much.
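    One thing worth trying (an assumption, not a confirmed diagnosis): load the clip once and reuse that single player, so a keypress only replays an already-decoded buffer instead of re-reading the file on every stroke:

        // created once, e.g. as a form field
        SoundPlayer sp = new SoundPlayer("typewriter.wav");

        private void Form1_Load(object sender, EventArgs e)
        {
            sp.Load();  // decode the wav up front, synchronously
        }

        private void richTextBox1_KeyPress(object sender, KeyPressEventArgs e)
        {
            sp.Play();  // replays the loaded clip on a worker thread
        }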

    Read the article

  • Access event.target in IE8 unobtrusive Javascript

    - by Mirko
    The following function gets the target element in a dropdown menu: function getTarget(evt){ var targetElement = null; //if it is a standard browser if (typeof evt.target != 'undefined'){ targetElement = evt.target; } //otherwise it is IE then adapt syntax else{ targetElement = evt.srcElement; } //return id of element when hovering over or if (targetElement.nodeName.toLowerCase() == 'li'){ return targetElement; } else if (targetElement.parentNode.nodeName.toLowerCase() == 'li'){ return targetElement.parentNode; } else{ return targetElement; } Needless to say, it works in Firefox, Chrome, Safari and Opera but it does not in IE8 (and I guess in previous versions as well). When I try to debug it with IE8 I get the error "Member not Found" on the line: targetElement = evt.srcElement; along with other subsequent errors, but I think this is the key line. Any help will be appreciated.
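    The usual cause (and a sketch of the standard fix): when a handler is attached with IE's attachEvent, the event is not passed as an argument but lives on window.event, so the evt the function receives is not a live event object. Falling back to window.event first, then to srcElement, covers both models:

        function getTarget(evt) {
            evt = evt || window.event;              // IE8: the event is global
            var el = evt.target || evt.srcElement;  // standard name, then IE's name
            if (el.nodeName.toLowerCase() === 'li') return el;
            if (el.parentNode && el.parentNode.nodeName.toLowerCase() === 'li') return el.parentNode;
            return el;
        }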

    Read the article

  • php, mySQL - how to create a table?

    - by user296516
    Hi guys, I have written code that should check whether there is a table called imei.$addimei and, if not, create it... $userdatabase = mysqli_connect('localhost', 'root', 'marina', 'imei'); ... $result = mysqli_query($userdatabase, "SELECT * FROM imei".$addimei."" ); if ( !$result ) { echo('creating table...'); /// if no such table, make one! mysql_query ( $userdatabase, 'CREATE TABLE imei'.$addimei.'( ID int NOT NULL AUTO_INCREMENT, PRIMARY KEY(ID), EVENT varchar(15), TIME varchar(25), FLD1 varchar(35), FLD2 varchar(35), IP varchar(25), )' ); } Yet the CREATE TABLE somehow doesn't seem to work. Warning: mysql_query() expects parameter 1 to be string, object given in C:\xampp\xampp\htdocs\mobi\mainmenu.php on line 564 Any ideas what's wrong? Thanks!
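    Two separate problems are visible (a sketch of a fix): the create call mixes APIs - mysqli_query() takes the link first, while the old mysql_query() takes the SQL string first and cannot use a mysqli link at all, hence the warning - and the column list has a trailing comma after the last column, which is a MySQL syntax error. Staying with mysqli:

        $sql = 'CREATE TABLE imei' . $addimei . ' (
                    ID INT NOT NULL AUTO_INCREMENT,
                    EVENT VARCHAR(15),
                    TIME VARCHAR(25),
                    FLD1 VARCHAR(35),
                    FLD2 VARCHAR(35),
                    IP VARCHAR(25),
                    PRIMARY KEY (ID)
                )';
        if (!mysqli_query($userdatabase, $sql)) {
            echo 'creating table failed: ' . mysqli_error($userdatabase);
        }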

    Read the article

  • Wmode transparent and symbols issue with foreign keyboard

    - by Boun
    Hello, I have the known issue with wmode transparent and an input textfield in my page. I know the question is often asked, but I have a special situation. I need to embed my swf with wmode=transparent, but my swf has an input textfield, and the bug with the "@" or "." symbols shows up. I have a French keyboard and I decided to work around the problem with a string replacement, using the FirefoxWmodeFix class from Manmaru (see the link below). http://www.manmaru.fr/mlab/?p=95 It works for my keyboard, but I need the same trick for EN/DE/IT keyboards. Could anyone help me with the right combination of keys on the different keyboards and OS systems to produce the "@" and "." symbols? I don't have any foreign Windows system to get the key code for each combination. Or if anyone has another solution, I would be pleased to hear it. Thanks in advance for any help.

    Read the article

  • Google Maps - Reverse Geocode -> Error "invalid label"

    - by Newbie
    Hello! I have the coordinates of my marker. Now I want to get the address of the marker, so I searched the web and found Google Maps reverse geocoding. I tried the following: $.getJSON('http://maps.google.com/maps/api/geocode/json?latlng='+point.lat()+','+ point.lng() +'&key='+apiKey+'&sensor=false&output=json&callback=?', function(data) { console.log(data); }); When I try to show the address, i.e. fetch the JSON, Firebug throws the following error: invalid label on "status": "OK",\n I searched a lot but didn't find an answer solving my problem. Can you tell me what's wrong with my code? Is there another way to get the address data for the coordinates?
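    "invalid label" is what Firefox reports when plain JSON gets evaluated as script: the callback=? in the URL makes jQuery expect JSONP, which the geocoding web service does not return. From page script, the v3 JS API's own geocoder avoids the cross-domain fetch entirely (a sketch, assuming the Maps JavaScript API is loaded and point is a google.maps.LatLng):

        var geocoder = new google.maps.Geocoder();
        geocoder.geocode({ latLng: point }, function(results, status) {
            if (status == google.maps.GeocoderStatus.OK && results[0]) {
                console.log(results[0].formatted_address);  // the marker's address
            }
        });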

    Read the article

  • GLSL: Strange light reflections [Solved]

    - by Tom
    According to this tutorial I'm trying to do normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1 shows a plane with two triangles, and each of them is illuminated differently (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors that are the same for each vertex: normal vector = 0, 1, 0 (red lines on image) tangent vector = 0, 0,-1 (green lines on image) bitangent vector = -1, 0, 0 (blue lines on image) Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried giving the tangents other values but the effect was still similar. Here are my shaders. Vertex shader: #version 130 // Input vertex data, different for all executions of this shader. in vec3 vertexPosition_modelspace; in vec2 vertexUV; in vec3 vertexNormal_modelspace; in vec3 vertexTangent_modelspace; in vec3 vertexBitangent_modelspace; // Output data ; will be interpolated for each fragment. out vec2 UV; out vec3 Position_worldspace; out vec3 EyeDirection_cameraspace; out vec3 LightDirection_cameraspace; out vec3 LightDirection_tangentspace; out vec3 EyeDirection_tangentspace; // Values that stay constant for the whole mesh. uniform mat4 MVP; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Output position of the vertex, in clip space : MVP * position gl_Position = MVP * vec4(vertexPosition_modelspace,1); // Position of the vertex, in worldspace : M * position Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz; // Vector that goes from the vertex to the camera, in camera space. // In camera space, the camera is at the origin (0,0,0). vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz; EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace; // Vector that goes from the vertex to the light, in camera space. M is omitted because it's identity. vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz; LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace; // UV of the vertex. No special space for this one. UV = vertexUV; // model to camera = ModelView vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace; vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace; vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace; mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); // You can use dot products instead of building this matrix and transposing it. See References for details. LightDirection_tangentspace = TBN * LightDirection_cameraspace; EyeDirection_tangentspace = TBN * EyeDirection_cameraspace; } Fragment shader: #version 130 // Interpolated values from the vertex shaders in vec2 UV; in vec3 Position_worldspace; in vec3 EyeDirection_cameraspace; in vec3 LightDirection_cameraspace; in vec3 LightDirection_tangentspace; in vec3 EyeDirection_tangentspace; // Output data out vec3 color; // Values that stay constant for the whole mesh.
    uniform sampler2D DiffuseTextureSampler; uniform sampler2D NormalTextureSampler; uniform sampler2D SpecularTextureSampler; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Light emission properties // You probably want to put them as uniforms vec3 LightColor = vec3(1,1,1); float LightPower = 40.0; // Material properties vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb; vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor; //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3; vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5); // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0); // Distance to the light float distance = length( LightPosition_worldspace - Position_worldspace ); // Normal of the computed fragment, in camera space vec3 n = TextureNormal_tangentspace; // Direction of the light (from the fragment to the light) vec3 l = normalize(LightDirection_tangentspace); // Cosine of the angle between the normal and the light direction, // clamped above 0 // - light is at the vertical of the triangle -> 1 // - light is perpendicular to the triangle -> 0 // - light is behind the triangle -> 0 float cosTheta = clamp( dot( n,l ), 0,1 ); // Eye vector (towards the camera) vec3 E = normalize(EyeDirection_tangentspace); // Direction in which the triangle reflects the light vec3 R = reflect(-l,n); // Cosine of the angle between the Eye vector and the Reflect vector, // clamped to 0 // - Looking into the reflection -> 1 // - Looking elsewhere -> < 1 float cosAlpha = clamp( dot( E,R ), 0,1 ); color = // Ambient : simulates indirect lighting MaterialAmbientColor + // Diffuse : "color" of the object MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) + // Specular : reflective highlight, like a mirror MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance); //color.xyz = E; //color.xyz = LightDirection_tangentspace; //color.xyz = EyeDirection_tangentspace; } I have replaced the original color value with the EyeDirection_tangentspace vector and then got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices?

    Read the article

  • Help needed with MySQL query to join data spanning multiple tables with data used as column names

    - by gurun8
    I need a little help putting together a SQL query that will give me the two resultsets shown in the original post (the resultsets, the data model, and the table data were all posted as images, which are missing here). The tricky part for me is that the columns to the right of the "Product" in the resultset aren't really columns in the database but rather key/value pairs spanned across the data model. My apologies in advance for the image-heavy question and the image quality; this just seemed like the easiest way to convey the information. It'll probably take someone less time to write the query statement to achieve the results than it did for me to assemble this question. By the way, the "product_option" table image is truncated, but it illustrates the general idea of the data structure. The MySQL server version is 5.1.45.
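    Since the schema and data only existed as images, just the shape of the answer can be sketched here: key/value option rows are normally pivoted into columns with one conditional aggregate per wanted column. Assuming hypothetical product and product_option(product_id, name, value) tables:

        SELECT p.name AS product,
               MAX(CASE WHEN o.name = 'color' THEN o.value END) AS color,
               MAX(CASE WHEN o.name = 'size'  THEN o.value END) AS size
        FROM product p
        LEFT JOIN product_option o ON o.product_id = p.id
        GROUP BY p.id, p.name;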

    Read the article

  • Does DB2 have an "insert or update" statement?

    - by Mikael Eriksson
    From my code (Java) I want to ensure that a row exists in the database (DB2) after my code is executed. My code now does a select and, if no result is returned, it does an insert. I really don't like this code since it exposes me to concurrency issues when running in a multi-threaded environment. What I would like to do is to put this logic in DB2 instead of in my Java code. Does DB2 have an "insert-or-update" statement? Or anything like it that I can use? For example: insertupdate into mytable values ('myid') Another way of doing it would probably be to always do the insert and catch "SQL code -803, primary key already exists", but I would like to avoid that if possible.
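    DB2 does have this: the SQL-standard MERGE statement (DB2 9 and later). A sketch for the single-column example above, assuming the key column is named id; a WHEN MATCHED THEN UPDATE clause can be added when existing rows should change too:

        MERGE INTO mytable t
        USING (VALUES ('myid')) AS s (id)
        ON t.id = s.id
        WHEN NOT MATCHED THEN
            INSERT (id) VALUES (s.id);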

    Read the article

  • What stage of normalization is this? (moving repeating data into separate table)

    - by Sergio
    Hi there, I have noticed that when designing a database I tend to shift any repeating sets of data into a separate table. For example, say I had a table of people, with each person living in a state. I would then move these repeating states into a separate table and reference them with foreign keys. However, what if I was not storing any more data about states? I would then have a table with just StateID and State in it. Is this action correct? State is dependent on the primary key of the users table, so does shifting it into its own table help with anything? Thanks,
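    For concreteness, a sketch of the two designs being weighed (hypothetical tables). The inline column already satisfies 3NF when State depends directly on the key; the lookup table buys referential integrity and less repetition rather than a higher normal form:

        -- inline: fine if nothing more is stored about a state
        CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR(50), state CHAR(2));

        -- lookup table: repeated values replaced by a foreign key
        CREATE TABLE state  (state_id INT PRIMARY KEY, state CHAR(2) UNIQUE);
        CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR(50),
                             state_id INT REFERENCES state (state_id));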

    Read the article

  • Lose changed data in session

    - by user150528
    Our ASP.NET 2.0 application has a very long (synchronized) process before sending the response back to the client. I observed that a second request, exactly the same as the initial one, was sent after the client IE8 had waited a long time for the response, while our application was still processing the first request. I use the page session with a predefined key to store a flag when the initial request arrives, and then start the long process while the client IE waits for the response; if a second request comes in, our application checks the session value. After our application sets the session flag and starts processing, I used Fiddler's "Abort Session" to abort the initial request; right away the second request (same as the first one) was sent automatically, but the session value set earlier seems to no longer exist. Any thoughts?

    Read the article

  • Web.config encryption/decryption

    - by Akshay
    In my application's web.config file I have a connection string stored. I encrypted it using '---open the web.config file Dim config As Configuration = _ ConfigurationManager.OpenWebConfiguration( _ Request.ApplicationPath) '---indicate the section to protect Dim section As ConfigurationSection = _ config.Sections("connectionStrings") '---specify the protection provider section.SectionInformation.ProtectSection(protectionProvider) '---apply the protection and update config.Save() Now I can decrypt it using the code Dim config As Configuration = _ ConfigurationManager.OpenWebConfiguration( _ Request.ApplicationPath) Dim section As ConfigurationSection = _ config.Sections("connectionStrings") section.SectionInformation.UnProtectSection() config.Save() I want to know where the key is stored. Also, if somehow my web.config file is stolen, will it be possible for someone to decrypt it using the code above?

    Read the article

  • Entity Framework serialization

    - by Alexandr
    Hello guys! I was confused with my problem. I'm using Entity Framework and want to save entities to the hard disk and then restore them. I have no problem with serializing/deserializing, but I get the exception "The object cannot be added to the ObjectStateManager because it already has an EntityKey. Use ObjectContext.Attach to attach an object that has an existing key" when I try to add the deserialized object to my datacontext. And nothing happens when I just Attach my entity to the datacontext. How can I achieve my goal? Thx in advance! -Alexandr-
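    A sketch of the usual pattern, assuming EF 4's ObjectContext ('Customers' is a hypothetical entity set name): an entity that already carries an EntityKey must be attached rather than added, and attaching puts it in the Unchanged state - which is why "nothing happens" - so it has to be marked Modified explicitly before saving:

        context.Customers.Attach(deserialized);          // attaches as Unchanged
        context.ObjectStateManager.ChangeObjectState(
            deserialized, EntityState.Modified);         // now SaveChanges issues an UPDATE
        context.SaveChanges();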

    Read the article

  • Sort by an object's type

    - by Richard Levasseur
    Hi all, I have code that statically registers (type, handler_function) pairs at module load time, resulting in a dict like this: HANDLERS = { str: HandleStr, int: HandleInt, ParentClass: HandleCustomParent, ChildClass: HandleCustomChild } def HandleObject(obj): for data_type in sorted(HANDLERS.keys(), ???): if isinstance(obj, data_type): HANDLERS[data_type](obj) Where ChildClass inherits from ParentClass. The problem is that, since it's a dict, the order isn't defined - but how do I introspect type objects to figure out a sort key? The resulting order should be child classes followed by super classes (most specific types first). E.g. str comes before basestring, and ChildClass comes before ParentClass. If types are unrelated, it doesn't matter where they go relative to each other.
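    One way to get "most specific first" (a sketch, assuming new-style classes): a class's method resolution order grows by one entry per inheritance level, so sorting on len(cls.__mro__) in reverse puts subclasses ahead of their bases and leaves unrelated types in an arbitrary but valid order:

        def HandleObject(obj):
            for data_type in sorted(HANDLERS, key=lambda c: len(c.__mro__), reverse=True):
                if isinstance(obj, data_type):
                    return HANDLERS[data_type](obj)

        # str.__mro__ is (str, basestring, object) and basestring's is (basestring, object),
        # so str sorts ahead of basestring, and ChildClass ahead of ParentClass.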

    Read the article

  • Access to selection in gmail message body with Google Apps Script

    - by Mike Ellis
    Can Apps Script access the current selection in a Gmail message? I frequently compose messages that include engineering calculations and make use of the Google Calc feature to do the calculation or convert to the desired units, e.g. 4000 Btu/hr * 8 hrs in kWh It would be really convenient to be able to select the above, hit a mapped key (e.g. Ctrl-K) and have the result inserted after the expression: 4000 Btu/hr * 8 hrs in kWh = 0.9378 kWh instead of having to paste the expression into a search box and then copy and paste the answer. I could certainly write a solution using a keymapper and a small python script to grab the current selection, send it to the gcalc api, etc ..., but my real motivation is to get familiar with Apps Script's capabilities and limitations. I suppose the uber-question here is "what kinds of user actions and state information can Apps Script access in Gmail messages (and/or Google docs) that are being edited?"

    Read the article

  • Custom validation works in development but not in unit test

    - by Geolev
    I want to validate that at least one of two columns have a value in my model. I found somewhere on the web that I could create a custom validator as follows: # Check for the presence of one or another field: # :validates_presence_of_at_least_one_field :last_name, :company_name - would require either last_name or company_name to be filled in # also works with arrays # :validates_presence_of_at_least_one_field :email, [:name, :address, :city, :state] - would require email or a mailing type address module ActiveRecord module Validations module ClassMethods def validates_presence_of_at_least_one_field(*attr_names) msg = attr_names.collect {|a| a.is_a?(Array) ? " ( #{a.join(", ")} ) " : a.to_s}.join(", ") + "can't all be blank. At least one field must be filled in." configuration = { :on => :save, :message => msg } configuration.update(attr_names.extract_options!) send(validation_method(configuration[:on]), configuration) do |record| found = false attr_names.each do |a| a = [a] unless a.is_a?(Array) found = true a.each do |attr| value = record.respond_to?(attr.to_s) ? record.send(attr.to_s) : record[attr.to_s] found = !value.blank? end break if found end record.errors.add_to_base(configuration[:message]) unless found end end end end end I put this in a file called lib/acs_validator.rb in my project and added "require 'acs_validator'" to my environment.rb. This does exactly what I want. It works perfectly when I manually test it in the development environment but when I write a unit test it breaks my test environment. This is my unit test: require 'test_helper' class CustomerTest < ActiveSupport::TestCase # Replace this with your real tests. test "the truth" do assert true end test "customer not valid" do puts "customer not valid" customer = Customer.new assert !customer.valid? assert customer.errors.invalid?(:subdomain) assert_equal "Company Name and Last Name can't both be blank.", customer.errors.on(:contact_lname) end end This is my model: class Customer < ActiveRecord::Base validates_presence_of :subdomain validates_presence_of_at_least_one_field :customer_company_name, :contact_lname, :message => "Company Name and Last Name can't both be blank." has_one :service_plan end When I run the unit test, I get the following error: DEPRECATION WARNING: Rake tasks in vendor/plugins/admin_data/tasks, vendor/plugins/admin_data/tasks, and vendor/plugins/admin_data/tasks are deprecated. Use lib/tasks instead. (called from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10) Couldn't drop acs_test : #<ActiveRecord::StatementInvalid: PGError: ERROR: database "acs_test" is being accessed by other users DETAIL: There are 1 other session(s) using the database. 
: DROP DATABASE IF EXISTS "acs_test"> acs_test already exists NOTICE: CREATE TABLE will create implicit sequence "customers_id_seq" for serial column "customers.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "customers_pkey" for table "customers" NOTICE: CREATE TABLE will create implicit sequence "service_plans_id_seq" for serial column "service_plans.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "service_plans_pkey" for table "service_plans" /usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/customer_test.rb" "test/unit/service_plan_test.rb" "test/unit/helpers/dashboard_helper_test.rb" "test/unit/helpers/customers_helper_test.rb" "test/unit/helpers/service_plans_helper_test.rb" /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1994:in `method_missing_without_paginate': undefined method `validates_presence_of_at_least_one_field' for #<Class:0xb7076bd0> (NoMethodError) from /usr/lib/ruby/gems/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing' from /home/george/projects/advancedcomfortcs/app/models/customer.rb:3 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:158:in `require' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:265:in `require_or_load' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:224:in `depend_on' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:136:in `require_dependency' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:414:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `each' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `each' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:197:in `process' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `send' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `run' from /home/george/projects/advancedcomfortcs/config/environment.rb:9 from ./test/test_helper.rb:2:in `require' from ./test/test_helper.rb:2 from ./test/unit/customer_test.rb:1:in `require' from ./test/unit/customer_test.rb:1 from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `load' from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5 from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `each' from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5 rake aborted! Command failed with status (1): [/usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ru...] (See full trace by running task with --trace) It seems to have stepped on will_paginate somehow. Does anyone have any suggestions? Is there another way to do the validation I'm attempting to do? Thanks, George
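    A guess, not a confirmed diagnosis: the failure smells like load order - the reopened ActiveRecord::Validations::ClassMethods may not be the module Customer sees by the time the test environment loads. A wiring that sidesteps the question is to define the method in its own module and extend ActiveRecord::Base from an initializer:

        # config/initializers/acs_validator.rb (hypothetical placement)
        module AcsValidations
          def validates_presence_of_at_least_one_field(*attr_names)
            # ... same body as in lib/acs_validator.rb ...
          end
        end

        ActiveRecord::Base.extend(AcsValidations)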

    Read the article

  • LRU caches in C

    - by lazyconfabulator
    I need to cache a large (but variable) number of smallish (1 kilobyte to 10 megabytes) files in memory, for a C application (in a *nix environment). Since I don't want to eat all my memory, I'd like to set a hard memory limit (say, 64 megabytes) and push files into a hash table with the file name as the key, disposing of the entries with the least use. What I believe I need is an LRU cache. Really, I'd rather not roll my own, so if someone knows where I can find a workable library, please point the way. Failing that, can someone provide a simple example of an LRU cache in C? Related posts suggested a hash table with a doubly-linked list, but I'm not even clear on how a doubly-linked list keeps LRU. Side note: I realize this is almost exactly the function of memcache, but it's not an option for me. I also took a look at the source hoping to enlighten myself on LRU caching, with no success.
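    On the "how does the list keep LRU" part, a minimal sketch of the bookkeeping (the hash table is omitted; any string-keyed table works): every hit unlinks the node and pushes it to the head, so the tail is always the least recently used entry, and eviction pops tails until the budget fits:

        #include <stdlib.h>

        typedef struct Node {
            char *key;               /* same key the hash table uses */
            void *data;
            size_t size;
            struct Node *prev, *next;
        } Node;

        typedef struct {
            Node *head, *tail;       /* head = most recent, tail = least recent */
            size_t used, limit;      /* bytes cached vs. the hard cap */
        } Lru;

        static void unlink_node(Lru *c, Node *n) {
            if (n->prev) n->prev->next = n->next; else c->head = n->next;
            if (n->next) n->next->prev = n->prev; else c->tail = n->prev;
            n->prev = n->next = NULL;
        }

        static void push_front(Lru *c, Node *n) {
            n->next = c->head;
            n->prev = NULL;
            if (c->head) c->head->prev = n; else c->tail = n;
            c->head = n;
        }

        void lru_touch(Lru *c, Node *n) {          /* call on every cache hit */
            unlink_node(c, n);
            push_front(c, n);
        }

        void lru_make_room(Lru *c, size_t need) {  /* call before each insert */
            while (c->tail && c->used + need > c->limit) {
                Node *victim = c->tail;            /* least recently used */
                unlink_node(c, victim);
                c->used -= victim->size;
                /* also remove victim->key from the hash table here, then: */
                free(victim->key); free(victim->data); free(victim);
            }
        }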

    Read the article

  • Custom certificate as proof of transaction

    - by Andy
    I'm developing a site where a user conducts a given transaction and, once it completes, the user is issued a 'secure certificate'. The certificate serves as proof of the transaction, and the user is able to upload the certificate at a later stage to view the details of the transaction. At the moment I'm using a custom XML document with encrypted fields. It works perfectly, but I would like a standardized approach, such as an X.509 certificate. I'm no encryption expert, but from what I gather, X.509 is more geared towards SSL certificates issued by a CA. Is it possible to create your own valid CRT file? As a test, I created a CRT file with the example provided on Wikipedia. However, when I open the file in Windows I get this warning: Invalid Public Key Security Object File - This file is invalid as the following: Security Certificate. Not having much luck here, so time to ask the experts. What direction should I be heading in? Any guidance would be greatly appreciated.

    Read the article

  • How to handle a DataTemplate creation

    - by Shimmy
    Hello! Take a look at the following xaml: <ListBox ItemsSource="{Binding}"> <ListBox.Resources> <CollectionViewSource x:Key="CVS"/> </ListBox.Resources> <ListBox.DataTemplate> <DataTemplate OnBinding="myBinding"> <ListBox DataContext="{StaticResource CVS}" ItemsSource="{Binding}" /> </DataTemplate> </ListBox.DataTemplate> </ListBox> So I can handle the binding and manually retrieve the CVS and set its Source property to my custom stuff according to the DataTemplate's DataContext. Or maybe there is a different way of doing it? Any ideas are welcomed!

    Read the article

  • Rabbitmq queue sharding

    - by kolchanov
    I have to implement this scenario: An external application publishes messages to RabbitMQ. Each message has a client_id property; we can place this id in the routing key, a message header, or some other property. I have to implement sharding in the exchange routing logic - the message should be delivered to a specific queue based on the client_id range. Is it possible to implement this with the standard exchanges? If not, which exchange should I take as the base? How can I dynamically change the client_id ranges?

    Read the article

  • Office 2010: It’s not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing.  The bits have left the (product team’s) building.  Will you upgrade? This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word.  There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions.  Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform.  Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms.  We’ve come a long way baby.  Or have we?

    As it does every three years or so, debate will now start to rage over whether we need a “14th” version of the PC platform’s standard word processor, or a “13th” version of the spreadsheet.  If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative.  Thing is, that premise is valid for certain customers and not others.

    The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services.  The core apps thus grow in mission: Excel is a BI tool.  Word is a collaborative editorial system for the production of publications.  PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online.  Outlook is a time and task management system.  Access is a rich client front-end for data-driven self-service SharePoint applications.  OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents.

    Google Docs and other cloud productivity platforms like Zoho don’t really do these things.  And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.”  They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized.

    It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs.  For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers?  I need my time back.  I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them.  And I also need my customers to be able to get more value out of the services I provide.

    Help me triage my inbox, help me get proposals done more quickly and make them easier to read.  Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations.  And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online.

    Those are my criteria.  And, with those in mind, Office 2010 is looking like a worthwhile upgrade.  Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling.  I provide a brief roundup of them here.  It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively.

    Across the Suite

    More than any other, this release of Office aims to give collaboration a real workout.  In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time.  Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network.

    The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window).  It’s also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs.

    Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing, are performed.

    And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed.  That’s much nicer than stripping it off, or adding it back, afterwards.

    And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents.  I know that’s useful to me, because I often document or critique applications and need to show them in action.  For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular.

    Excel

    At first glance, Excel 2010 looks and acts nearly identically to the 2007 version.  But additional glances are necessary.  It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet.  And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale.  That all changes with this release.

    The first reason things change is that Excel has been tuned for performance.  It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version.  For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to.

    On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is).  But most important, Excel 2010 supports the new PowerPivot add-in which brings true self-service BI to Office.  PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.  Rather than forcing users to build “spreadmarts” or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance.

    And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back.  Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home.  Support for write-back is long overdue, but I’m still glad it’s there, because I had almost given up on it.

    PowerPoint

    This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool.  Whether or not it’s successful in this pursuit, and if offering this is even a sensible goal, is another question.

    Regardless, the new capabilities are kind of interesting.  A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint’s transformation into a media tool and away from a pure presentation tool.  These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output.

    Congruent with that is PowerPoint’s new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft’s own LiveMeeting.  Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well.

    Outlook

    Microsoft’s ubiquitous email/calendar/contact/task management tool gains long overdue speed improvements, especially against POP3 email accounts.  Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live).  A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in.

    I don’t know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn.  But among the other features, there’s very little not to like.

    OneNote

    To me, OneNote is the part of Office that just keeps getting better.  There is one major caveat to this, which I’ll cover in a moment, but let’s first catalog what new stuff OneNote 2010 brings.  The best part of OneNote is the way each of its versions has managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation.  None of that is new to 2010, but the new version does make creation of pages and subpages easier and also makes simple work out of promoting and demoting pages from sub page to full page status.  And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page’s name in double-square-brackets (“[[…]]”) creates a link to it.

    OneNote is also great at integrating content outside of its notebooks.  With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it’s an Office document or a Web page, links the notes you’re typing, at the time, to it.  A single click from your notes later on will bring that same document or Web page back on-screen.  Embedding content from Web pages and elsewhere is also easier.  Using OneNote’s Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area.  Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice.

    Collaboration gets better too.  Real-time multi-author editing is better accommodated, and determining author lineage of particular changes is easily carried out.

    My one pet peeve with OneNote is the difficulty using it when I’m not on a Windows PC.  OneNote’s main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers.  Since I have an Android phone and an iPad, I am practically forced to use it.  However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7.  In the meantime, it turns out that using OneNote’s Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes) and that Evernote’s Email function combined with Outlook’s Send to OneNote button (in the Move group of the ribbon’s Home tab) can achieve the reverse.

    Access

    To me, the big change in Access 2007 was its tight integration with SharePoint lists.  Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint’s Access Services.  Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself.

    To me this makes all kinds of sense.  Although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010’s new Business Connectivity Services.  Each of these tools provides overlapping data entry and data maintenance functionality.

    But if you do prefer Access, then you’ll like things like templates and application parts that make it easier to get off the blank page.  These features help you quickly get tables, forms and reports built out.  To make things look nice, Access even gets its own version of Excel’s Conditional Formatting feature, letting you add data bars and data-driven text formatting.

    Word

    As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite’s flagship word processing application.  So are there any enhancements in Word worth mentioning?  I think so.  The most important one has to be the collaboration features.  Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited.  Word also shows you who’s editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently.

    There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck.  Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy.  Not earth shattering, but nice.

    Other Apps and Summarized Findings

    What about InfoPath, Publisher, Visio and Project?  I haven’t looked at them yet.  And for this post, I think that’s fine.  While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post service the general purpose needs of most users.  And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go.  But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake.  I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • Does ImageIO read imply anti-aliased scaling?

    - by tigger
    I've replaced the Java-internal ImageFetcher with my own implementation using ImageIO. Some image renderers in our software, which use these images, now draw scaled images with anti-aliasing instead of without it. The only change is the source of the images, which are now BufferedImages instead of Toolkit images. The question now is: where is the difference? Which property causes the images to scale anti-aliased? I've always thought that the anti-alias key ONLY depends on the graphics I paint on - but this is obviously wrong. By the way: unfortunately I cannot change the renderers.
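    A guess at both cause and workaround: renderers often pick a scaling path based on the image's type, and a BufferedImage can take a different (interpolating) path than the Toolkit images the old ImageFetcher produced. If the renderers really can't be touched, round-tripping the decoded image through its ImageProducer hands them a Toolkit-style image again:

        import java.awt.Image;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        class ToolkitStyleLoader {
            static Image load(File file) throws IOException {
                BufferedImage decoded = ImageIO.read(file);
                // Rebuild a Toolkit-backed image from the BufferedImage's producer,
                // approximating what the old ImageFetcher handed out.
                return Toolkit.getDefaultToolkit().createImage(decoded.getSource());
            }
        }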

    Read the article
