Search Results

Search found 9062 results on 363 pages for 'big empin'.

Page 14/363 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Big and Little endian question

    - by Bobby
    I have the following code: // Incrementer datastores.cmtDatastores.u32Region[0] += 1; // Decrementer datastores.cmtDatastores.u32Region[1] = (datastores.cmtDatastores.u32Region[1] == 0) ? 10 : datastores.cmtDatastores.u32Region[1] - 1; // Toggler datastores.cmtDatastores.u32Region[2] = (datastores.cmtDatastores.u32Region[2] == 0x0000) ? 0xFFFF : 0x0000; The u32Region array is an unsigned int array that is part of a struct. Later in the code I convert this array to big-endian format: unsigned long *swapL = (unsigned long*)&datastores.cmtDatastores.u32Region[50]; for (int i=0;i<50;i++) { swapL[i] = _byteswap_ulong(swapL[i]); } This entire code snippet is part of a loop that repeats indefinitely. It is a contrived program that increments one element, decrements another and toggles a third. The array is then sent via TCP to another machine that unpacks the data. The first loop iteration works fine. After that, since the data is now in big-endian format, the "increment", "decrement" and "toggle" produce incorrect values. Obviously, if in the first iteration datastores.cmtDatastores.u32Region[0] += 1; results in 1, in the second iteration it should be 2, but it's not: it is adding the number 1 (little-endian) to the number in datastores.cmtDatastores.u32Region[0] (big-endian). I guess I have to convert back to little-endian at the start of every loop, but it seems there should be an easier way to do this. Any thoughts? Thanks, Bobby
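
    A common way out of this is never to byte-swap the working array at all: keep the values in host (little-endian) order for the arithmetic and swap only a separate copy just before it goes out over TCP. Below is a minimal C# sketch of that pattern (the array name and sizes are illustrative, not taken from the original code); the same idea applies directly to the C++ version with _byteswap_ulong.

        using System;
        using System.Net;

        class EndianDemo
        {
            static void Main()
            {
                // The working copy stays in host (little-endian) order, so the
                // increment/decrement/toggle arithmetic is always correct.
                uint[] region = new uint[50];
                region[0] += 1;                                     // incrementer
                region[1] = region[1] == 0 ? 10u : region[1] - 1;   // decrementer
                region[2] = region[2] == 0 ? 0xFFFFu : 0u;          // toggler

                // Swap only a throwaway send buffer; the working copy is never touched.
                uint[] sendBuffer = new uint[region.Length];
                for (int i = 0; i < region.Length; i++)
                {
                    sendBuffer[i] = (uint)IPAddress.HostToNetworkOrder((int)region[i]);
                }
                // sendBuffer is what gets written to the TCP stream.
            }
        }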

    Read the article

  • Big problem with Fluent NHibernate, C# and MySQL: need to search in a BLOB

    - by VinnyG
    I've made a big mistake, and now I have to find a solution. This was my first project using Fluent NHibernate, and I mapped an object this way: public PosteCandidateMap() { Id(x => x.Id); Map(x => x.Candidate); Map(x => x.Status); Map(x => x.Poste); Map(x => x.MatchPossibility); Map(x => x.ModificationDate); } So the whole Poste object is stored in the database, when all I really needed was the PosteId. Now I have to find all Candidates for one Poste, so in my repository I have: return GetAll().Where(x => x.Poste.Id == id).ToList(); But this is very slow since it loads all the items, and we now have more than 1500 items in the table; at first the project was not supposed to be that big (not a big paycheck either). Now I'm trying to do this with Criteria or LINQ, but it's not working since my Poste is stored in a BLOB. Is there any way I can change this easily? Thanks a lot for the help!
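
    The usual fix is to map Poste (and Candidate) as references rather than plain properties, so NHibernate stores a foreign-key column instead of a serialized blob, and a filter on Poste.Id can then run in SQL. A rough sketch with Fluent NHibernate follows; the entity stubs are inferred from the question (names and types assumed), and note that it implies a schema change (a PosteId column replacing the blob) plus a one-off migration of the existing rows.

        using System;
        using FluentNHibernate.Mapping;

        // Minimal entity stubs inferred from the question (names and types assumed).
        public class Candidate { public virtual int Id { get; set; } }
        public class Poste { public virtual int Id { get; set; } }

        public class PosteCandidate
        {
            public virtual int Id { get; set; }
            public virtual Candidate Candidate { get; set; }
            public virtual Poste Poste { get; set; }
            public virtual string Status { get; set; }
            public virtual int MatchPossibility { get; set; }
            public virtual DateTime ModificationDate { get; set; }
        }

        public class PosteCandidateMap : ClassMap<PosteCandidate>
        {
            public PosteCandidateMap()
            {
                Id(x => x.Id);
                References(x => x.Candidate);   // many-to-one: persists a CandidateId column
                References(x => x.Poste);       // many-to-one: persists a PosteId column, not a serialized Poste
                Map(x => x.Status);
                Map(x => x.MatchPossibility);
                Map(x => x.ModificationDate);
            }
        }

        // With the reference mapped, the repository filter can be translated to SQL
        // instead of loading every row (NHibernate LINQ provider assumed):
        // return session.Query<PosteCandidate>().Where(x => x.Poste.Id == id).ToList();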

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5 and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it: Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint. We've got part 1 working fine. The main problem with part 2 is that in the response rewrite because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string, "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is: http:\u002f\u002fservername.company.com\u002f And should be changed to: https:\u002f\u002fservername.company.com\u002f Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.

    Read the article

  • web2py or Grok (Zope) on a big portal

    - by Robert
    Hi, I am planning a big project (1,000,000 users, approximately 500 requests per second at peak times). For performance I'm going to use a non-relational DBMS (each request could cost a lot of work in a relational DBMS like MySQL), so I can't use the DAL. My question is: how does web2py cope with heavy traffic, and does it handle requests concurrently? I'm considering web2py or Grok (Zope). How does ZODB (Zope Object Database) perform with a lot of data? Is there any comparison with object-relational PostgreSQL? Could you please advise me?

    Read the article

  • What's the next big thing after LINQ?

    - by Leniel Macaferi
    I started using LINQ (Language Integrated Query) when it was still in beta, more specifically the Microsoft .NET LINQ Preview (May 2006). Almost 4 years have passed and here we are using LINQ in a lot of projects for the most diverse tasks. I even wrote my final college project based on LINQ, so you can see how much I like it. LINQ and, more recently, PLINQ (Parallel LINQ) give our jobs a great boost when it comes to more programming power and fewer lines of code, leading to more expressive and readable code. I keep thinking about what the next big language improvement for C# after LINQ could be. I know there are some promising language features coming, such as Code Contracts, but nothing with the impact that LINQ had. What do you think could be the next big thing?

    Read the article

  • How big should your controllers be in ASP.NET MVC?

    - by ooo
    I see the new feature of areas in ASP.NET MVC 2, and it got me thinking: why would I need this? I did some reading on the use cases, and for me it came down to a specific question: how big and how broad in scope should my controllers be? Should I try to have many little controllers? One big controller? How do people determine the sweet spot for the number of controllers? I think mine may be too large (which had me questioning areas in the first place, as maybe my controller name should really be an area containing a number of smaller controllers).
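
    For reference, areas in ASP.NET MVC 2 are registered with an AreaRegistration subclass, which is what lets one oversized controller be split into several small, area-scoped ones. A minimal sketch is below; "Billing" is a hypothetical area name, not something from the question.

        using System.Web.Mvc;

        // Hypothetical "Billing" area: it gets its own route prefix and its own
        // small set of controllers under /Areas/Billing/Controllers.
        public class BillingAreaRegistration : AreaRegistration
        {
            public override string AreaName
            {
                get { return "Billing"; }
            }

            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "Billing_default",
                    "Billing/{controller}/{action}/{id}",
                    new { action = "Index", id = UrlParameter.Optional });
            }
        }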

    Read the article

  • New to Rails. Question about routing big URLs

    - by Gautam
    Hi, I have just started learning Ruby on Rails and I have a question about routing. The default routing in Rails is :controller/:action/:id. It works fine for a simple example, say example.com/publisher/author/book_name. Could you tell me how you would handle something much bigger, like this site: http://www.telegraph.co.uk/sport/football/leagues/premierleague/chelsea/ Could you help me understand what the various controllers, actions and ids would be for the URL above, and how to write the controllers and models to achieve this? Could you also suggest some good tutorials on dealing with big URLs like this? Looking forward to your help. Thanks in advance, Gautam

    Read the article

  • Perl "Day too big" - root cause

    - by azp74
    I have been helping someone debug some code where the error message was "Day too big". I know that this springs from localtime and the Y2038 bug (most Google results appear to be people dealing with cookies expiring well into the future). We appear to have 'fixed' the problem by using time to get the current date. However, given that none of our original dates should have hit the 2038 issue, I'm sceptical that we've actually fixed the problem ... Are there other instances that anyone knows of where one would hit "Day too big"?

    Read the article

  • Hi, problem maintaining big JavaScript code

    - by Totty
    I have more than 1000 lines in a big jQuery plugin; it is actually a big class that includes some other classes, but they have to be in the same file. I have included a piece of the code below; let me know if you have another way to simplify it. The actual problem is that I have a gallery with a lot of features; it is dynamic, with smart AJAX data loading, so it requires a lot of classes to use it properly and to cache the data. (function($){ var TottysGallery = function(element, options, data){ var Core = new function(){...}; var Core2 = new function(){...}; var Core3 = new function(){...}; var Core = function(){...}; };

    Read the article

  • How to manage a big LINQ DataContext?

    - by Rev
    Hi. The major problem in .NET programs is how to manage memory for the best performance. Microsoft addresses this with the garbage collector in .NET, so we don't have to do much to manage memory ourselves (or rather, we can rely on the GC easily). But when you develop a big project (a business app), you end up with a great many tables in the database. So if you use LINQ to SQL, you must build a DataContext that includes a hundred or more tables, and that creates a problem: when you create an object from that DataContext, it seems to take a large amount of memory. Also, we can't split the DataContext into several DataContexts (because of the relations between the tables). So how do you manage the DataContext and memory?
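
    One point worth noting: a LINQ to SQL DataContext with many tables is mostly mapping metadata, and the designer-generated context typically shares that metadata through a static MappingSource, so creating instances is usually cheap. The common guidance is therefore to treat the DataContext as a short-lived unit of work rather than one big long-lived object. A minimal sketch of that pattern follows; MyBigDataContext, Customers and the column names are placeholders, not from the question.

        using System.Linq;

        // Sketch of the short-lived unit-of-work pattern: create the context per
        // operation and dispose it, instead of keeping one big instance alive.
        public static class CustomerQueries
        {
            public static string GetCustomerName(int customerId)
            {
                using (var db = new MyBigDataContext())   // placeholder context name
                {
                    return db.Customers                   // only the rows you query are materialized
                             .Where(c => c.CustomerId == customerId)
                             .Select(c => c.Name)
                             .Single();
                }
            }
        }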

    Read the article

  • MD5 hash performance with big files when checking copied files in a shared folder

    - by alhambraeidos
    Hi all, my Windows Forms .NET app on Windows XP copies PDF files to a shared network folder on a Windows 2003 server. The admin user on the Windows 2003 machine has detected some corrupt PDF files in that shared folder. I want to check whether a file was copied correctly to the shared folder. Andre Krijen tells me the best way is to create an MD5 hash of the original file and, once the file is copied, compare the MD5 hash of the copy with that of the original. I have big PDF files; would computing an MD5 hash of a big file cause any performance problem? What if I only check the length of the files (original and copy), without generating an MD5 hash? Thanks in advance.
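
    Hashing a big file is normally not a memory problem, because the hash can be computed from a stream in fixed-size chunks; the cost is one sequential read of the file (paid over the network if you hash the copy on the share). A minimal C# sketch, with an illustrative helper name:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class FileHash
        {
            // Streams the file through MD5, so memory use stays flat no matter
            // how large the PDF is; only the 16-byte digest is kept.
            public static string Md5Of(string path)
            {
                using (var md5 = MD5.Create())
                using (var stream = File.OpenRead(path))
                {
                    byte[] hash = md5.ComputeHash(stream);
                    return BitConverter.ToString(hash).Replace("-", string.Empty);
                }
            }
        }

        // Usage: the copy is considered good when both digests match.
        // bool ok = FileHash.Md5Of(localPath) == FileHash.Md5Of(sharePath);

    Comparing only file lengths is cheaper but weaker: it catches truncated copies, not corrupted bytes.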

    Read the article

  • One big call vs. multiple smaller TSQL calls

    - by BrokeMyLegBiking
    I have an ADO.NET/TSQL performance question. We have two options in our application: 1) One big database call with multiple result sets, then in code step through each result set and populate my objects. This results in one round trip to the database. 2) Multiple small database calls. There is much more code reuse with option 2, which is an advantage of that option. But I would like to get some input on what the performance cost is. Are two small round trips twice as slow as one big round trip to the database, or is it just a small, say 10%, performance loss? We are using C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.
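
    For reference, option 1 (one call, several result sets) is read in ADO.NET by walking the reader with NextResult(). A rough sketch is below; the stored procedure and column names are placeholders, not from the question.

        using System.Data;
        using System.Data.SqlClient;

        public static class Dashboard
        {
            // One round trip: the procedure returns two result sets, read in order.
            public static void Load(string connectionString)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("usp_GetDashboardData", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();

                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())            // first result set, e.g. customers
                        {
                            string name = reader.GetString(reader.GetOrdinal("CustomerName"));
                        }

                        reader.NextResult();             // advance to the second result set
                        while (reader.Read())            // e.g. orders
                        {
                            int orderId = reader.GetInt32(reader.GetOrdinal("OrderId"));
                        }
                    }
                }
            }
        }

    In practice the gap is usually dominated by network round-trip latency rather than a fixed percentage, so it is worth measuring with your own latency numbers.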

    Read the article

  • What to do with a big image that's slowing website loading down significantly

    - by Dave
    Hi, I'm working on a website that's already been designed by someone else. The designer has used a big image (900x700, 100 KB) which contains a big logo right across the top, plus the background for two columns. This image loads every time a page is loaded, as it forms the basis for the website. What should I do with it to improve loading time? I'm considering splitting it up into two or more images, especially the logo at the top. Does splitting up images like that decrease loading time in any significant way? Thanks. Edit: also, all the images are .jpg; would changing them to .gif or .png help at all?

    Read the article

  • How is load balancing in big systems implemented?

    - by uther-lightbringer
    Hello, I'm wondering how load balancing is implemented in really big applications like Google or Facebook. I know that in a normal scenario there may be a machine dedicated to this task, but I would like to know how it is solved in really big applications with hundreds of thousands of people accessing them at any given time. I am just wondering how exactly, when someone types google.com, that request finds its way to a specific computer (are there multiple load balancers, and how is it set up and implemented so that a user's request finds its way to a specific balancer out of the many others?). I would really appreciate it if someone could enlighten me on this issue. Thank you.

    Read the article

  • The big dude: server cost (€), and the 'what must I look for' question

    - by Angelus
    Hi again, and sorry for the bad title. This time I'm thinking about a big project, and I have a big gap in my knowledge about servers and what they cost (economically). The big project consists of a new table game played online with bets. Think of it like a poker server that must respond well to thousands of people at the same time. So I have the big question: what type of server must I look for, and what features must I look for in it? Should I be thinking about cloud computing? Thank you in advance.

    Read the article

  • Issue with resetting auto increment from the default to a big number

    - by Sai Srikanth
    I have a MySQL table named Invoice for an inventory-monitoring site; invoice_number is a bigint(19) AUTO_INCREMENT field. Currently the AUTO_INCREMENT value is 1, and the client wants invoice_number to start from 50000. I reset it with the following script: ALTER TABLE INVOICES AUTO_INCREMENT = 50000; When I run an insert script to insert data in SQLDBX, it assigns invoice_number values starting from 50000. But when I try to insert a record through the application (a web application), the invoice_number value starts from 1. We are using the Spring-JDBC template to insert data into the MySQL database.

    Read the article

  • Set up DNS or F5 VIP to send traffic to a specific port

    - by Sam
    I have a clustered SQL instance set up at SERVER01\dev08. It's assigned a static port of 1466. Can I set up something which will let users connect to SERVER01 and hit that port? If this is possible, what problems might it create (all traffic coming to this name hitting only one port)? It seems that DNS has nothing to do with ports, and neither does the F5 BIG-IP.
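
    One detail that may help frame this: SQL Server clients can target a specific port directly with the "host,port" form of the Data Source, so neither DNS (which only resolves names) nor the VIP has to carry any port information. A hedged C# sketch, using the server name from the question:

        using System.Data.SqlClient;

        public static class SqlPortExample
        {
            public static SqlConnection Open()
            {
                var builder = new SqlConnectionStringBuilder
                {
                    // "host,port" skips SQL Browser and hits the static port directly;
                    // "SERVER01\\dev08" would instead rely on SQL Browser to resolve the port.
                    DataSource = "SERVER01,1466",
                    IntegratedSecurity = true
                };

                var connection = new SqlConnection(builder.ConnectionString);
                connection.Open();
                return connection;
            }
        }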

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle, supporting the idea that whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any other vendor. If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi took an important step in defining this term, explaining that he believes it is a convergence of four essential technology elements: event processing for event filtering and business rules (with Oracle Event Processing); data transformation and loading (with Oracle Data Integrator); real-time replication and integration (with Oracle GoldenGate); and analytics and data discovery (with Oracle Business Intelligence). Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time processing) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast, where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and real-time in more detail.

    Read the article

  • git/gitolite: big git repo with several mini projects

    - by Jay
    I'm pretty new to the whole version control thing, and even more so with Git. I recently installed Git on my computers and set it up on a NAS server. However, I have several client folders, with several project folders per client folder. Each one of these client folders is a giant repo encompassing every project inside it. What I'm wondering is: is there a way to break this apart? So, for instance: the NAS is my 'origin' and has gitolite installed; on computer1 I have every project folder ever created in a client folder (a clean branch); on computer2 I do not have a checkout of the client branch (because all the projects in that branch are completed and I don't need a working copy of it), but I do have a brand new project folder for that client, "newproject". Is there a way to commit and push to the NAS repo from computer2? Or perhaps is there a better way of organizing all this?

    Read the article
