Search Results

Search found 97400 results on 3896 pages for 'application data'.


  • passing data to php in ajax [closed]

    - by MertMETIN
    I am trying to pass data to PHP; each checkbox carries its data in its id attribute, like this:

      <input align="right" type=checkbox name="checkArtist[]" class="checkClass" id="{$movieId}-{$mCast.id}"></li>

    Don't worry about {$movieId}-{$mCast.id}; those are Template Lite tags. I can successfully read the checkbox data on the AJAX side. The problem is that I cannot send that data to my PHP script:

      var artistIds = new Array();
      $(".p16 input:checked").each()(function(){
          artistIds.push($(this).attr('id'));
      });
      $.post('/json/crewonly/deleteDataAjax2', { 'artistIds': artistIds }, function(response){
          if(response=='ok') alert("ok");
      });

    The code above is my AJAX call, but on the PHP side $artistIds is always empty. I found a topic on Stack Overflow (http://stackoverflow.com/questions/5571646/how-to-pass-a-javascript-array-via-jquery-post-so-that-all-its-contents-are-acce) that says how to pass JS arrays: $.post('/url/to/page', {'someKeyName': variableName}); That is the same as my code. What's going wrong?
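
    A minimal sketch of the round trip, using the selector and endpoint from the question (one plausible shape, not the site's actual handler; note that jQuery's .each takes the callback directly, and it serializes the array as artistIds[], which PHP then exposes as an array):

      // JavaScript side: collect the checked ids and post them
      var artistIds = [];
      $(".p16 input:checked").each(function(){
          artistIds.push($(this).attr('id'));
      });
      $.post('/json/crewonly/deleteDataAjax2', { artistIds: artistIds }, function(response){
          if (response == 'ok') alert("ok");
      });

      // PHP side (deleteDataAjax2):
      $artistIds = isset($_POST['artistIds']) ? $_POST['artistIds'] : array();
      foreach ($artistIds as $id) {
          // each $id looks like "movieId-castId"
      }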


  • XNA - Saving Game Data without TypeWriter?

    - by Zabby
    In the past, I've had some trouble with reading and writing content for XNA. I think I'm firmly set on this tutorial, which sets up a four-project structure (Game, Content, Library, Pipeline Extension) for the solution that other websites suggest, too. The options it offers for content reading seem great. But there's a problem: this tutorial (and every other one I've found) has stated that the Content Pipeline Extension project is not distributed with the game. That is fine in itself, but it is combined with the fact that any content-writing objects are placed in this non-distributable library. The ability to write content of an already existing type (namely save-game files) is pretty critical to the project I'm trying to make. I've already learned the hard way that you simply can't add the Content Pipeline reference to another project besides the extension just to get easier access to the IntermediateSerializer. Is there another object I can access for writing save-game data? Am I actually, despite the warnings of this tutorial, able to use the TypeWriter outside of the Content Pipeline Extension project? Or is there a third option I am missing here?
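
    For plain save-game data (as opposed to pipeline-built content), one commonly suggested route is to bypass the pipeline entirely and serialize at runtime. A minimal sketch using System.Xml.Serialization (SaveData is a hypothetical payload type, not something from the tutorial):

      using System.IO;
      using System.Xml.Serialization;

      public class SaveData              // hypothetical save-game payload
      {
          public int Level;
          public int Score;
      }

      public static class SaveGame
      {
          public static void Write(string path, SaveData data)
          {
              var serializer = new XmlSerializer(typeof(SaveData));
              using (var stream = File.Create(path))
                  serializer.Serialize(stream, data);
          }

          public static SaveData Read(string path)
          {
              var serializer = new XmlSerializer(typeof(SaveData));
              using (var stream = File.OpenRead(path))
                  return (SaveData)serializer.Deserialize(stream);
          }
      }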


  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as tags/subjects about the articles, but as strings like "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to the new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions outside of databases, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear--I'm not entirely sure what kind of answer I'm looking for.
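
    One standard shape for this is a staging table plus one-off transform queries run during the load, rather than row-by-row edits afterward. As a hedged sketch (table and column names are hypothetical), the "YYYYMMDDd" strings could be converted in bulk in MySQL like so:

      -- strip the trailing 'd' marker and parse YYYYMMDD into a real DATE
      UPDATE articles_staging
      SET    article_date = STR_TO_DATE(SUBSTRING(raw_tag, 1, 8), '%Y%m%d')
      WHERE  raw_tag REGEXP '^[0-9]{8}d$';

    Anything too irregular for SQL string functions can be handled the same way by a small script that reads the staging rows, applies the regular expressions, and writes the cleaned values back before the final cut-over.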


  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART Pre-failure status - several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new HDD (one partition with rsync, the other with parted cp) and then gently swapped the old HDD for the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

      Read Error Rate: Normalized 108, Worst 99, Threshold 6, Raw Value 16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status five minutes later. This time I got the following:

      Read Error Rate: Normalized 109, Worst 99, Threshold 6, Raw Value 24792504

    As you see, there has been an increase in the raw error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And why is the overall assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature? Thanks
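
    For what it's worth, Seagate's raw Read Error Rate and Seek Error Rate values are widely reported to be packed counters (total operations and errors combined into one number), so large, steadily climbing raw values are expected on a healthy drive; the normalized value measured against the threshold (108-109 vs. 6 here) is what the overall assessment goes by. To read the full attribute table when the enclosure's USB bridge blocks SMART, the drive can be checked on a direct SATA port with smartmontools; a sketch (the device name is an assumption):

      # dump all SMART attributes, raw and normalized
      sudo smartctl -A /dev/sdb

      # drive temperature is usually attribute 194 (Temperature_Celsius)
      sudo smartctl -A /dev/sdb | grep -i temperature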


  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 / P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included in an ETL run. Additionally, all projects with the option to enable publication are included in the ETL run by default. Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that will limit the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a Where clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases: unnecessary projects could cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key to which projects are targeted during the ETL process. To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management database as Pxrptuser (the extended schema owner) and create a new view:

      create or replace view star_project_view
      as
      select PROJECTOBJECTID objectid
      from projectportfolio pp, projectprojectportfolio ppp
      where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
      and pp.name = 'STAR Projects'

    The main field that MUST be selected in the view is the projectobjectid. Selecting any field other than the projectobjectid will cause the view to be invalid and it will not work. Any Where clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line, or update it if it exists:

      star.project.filter.ds1=star_project_view

    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser will be accessed and this view will be used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.


  • What happens to remounted data/directories

    - by cauon
    According to suggestions in this post I am trying to tune my system to run better with a solid state drive, but regarding RAM disks and /etc/fstab usage I have some gaps in understanding. So let's say I add the following lines to /etc/fstab:

      tmpfs /tmp       tmpfs defaults,noatime,nodiratime,mode=1777 0 0
      tmpfs /var/spool tmpfs defaults,noatime,nodiratime,mode=1777 0 0
      tmpfs /var/tmp   tmpfs defaults,noatime,nodiratime,mode=1777 0 0
      tmpfs /var/log   tmpfs defaults,noatime,nodiratime,mode=0755 0 0

    I know that on startup these locations should now get mounted into RAM (hopefully). But what happens to the physical space that was mounted at those places before? Is it gone? Will it be back when I edit my /etc/fstab back to the version without tmpfs? Will the space still be allocated on my SSD in a way that I can't use it for any other data? Sometimes it is suggested to add the following line, too:

      none /var/cache aufs dirs=/tmp:/var/cache=ro 0 0

    What does this actually do? I noticed that /var/cache takes almost 1 GB of space on my hard disk, so should I clear the directory before activating this line? (This is related to the former question.) This causes me some confusion and I hope you can give me some clarification.

    UPDATE: I downloaded an image 600 MB in size into /tmp, which is mounted with the tmpfs settings above. I then wanted to compare RAM usage before and after the download; I expected RAM usage to increase by 600 MB. But the System Monitoring tool showed me no changes at all. How can this be? Does tmpfs work differently than I expect it to?
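
    On the UPDATE: tmpfs pages are accounted as shared memory / page cache rather than as any process's memory, so a "used by applications" graph will not move even though the RAM is genuinely occupied. A quick way to see the 600 MB, as a sketch:

      df -h /tmp    # shows size and usage of the tmpfs mount itself
      free -m       # the file shows up under "shared" (Shmem) and cached memory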


  • Better ways to get valuable data indexed that is currently ignored

    - by Sam
    Hi folks. It seems that my title attribute (as in <a title="">...</a>), which holds extremely valuable text describing the content of my simple-design page, is currently completely ignored by search engines and not indexed at all! Those descriptions should be indexed, as they describe valuable portions of an otherwise empty page with a clean glossary (neat and organised to the eye of the viewer). Putting all that descriptive data into visible space would ruin the design's less-is-more fundamentals. So, which alternatives to the title attribute do I have in order to place important content that is relevant for both users and search engines?

      A <a name="">......</>
      B <p name="">......</>
      C <a alt="">.......</>
      D <p alt="">.......</>

    From the above list arose my question: which of the above is an advisable alternative in order to get the valuable actual content indexed? Should it be in an a tag or a p tag? Or are there even better tags for this which still keep the layout clean? Your suggestions are much appreciated!


  • Converting raw data type to enumerated type

    - by Jim Lahman
    There are times when an enumerated type is preferred over using the raw data type. An example is when we need to check the health of x-ray gauges in use on a production line. Rather than using a scheme like 0, 1 and 2, we can use an enumerated type:

      /// <summary>
      /// POR Healthy status indicator
      /// </summary>
      /// <remarks>The healthy status is for each POR x-ray gauge; each has its own status.</remarks>
      [Flags]
      public enum POR_HEALTH : short
      {
          /// <summary>
          /// POR1 healthy status indicator
          /// </summary>
          POR1 = 0,
          /// <summary>
          /// POR2 healthy status indicator
          /// </summary>
          POR2 = 1,
          /// <summary>
          /// Both POR1 and POR2 healthy status indicator
          /// </summary>
          BOTH = 2
      }

    By using the [Flags] attribute, we are treating the enumerated type as a bit mask. We can then use bitwise operations such as AND, OR, NOT, etc. Now, when we want to check the health of a specific gauge, we would rather use the name of the gauge than its numeric identity; it makes for better reading and programming practice. To translate the numeric identity to the enumerated value, we use the Parse method of the Enum class:

      POR_HEALTH GaugeHealth = (POR_HEALTH) Enum.Parse(typeof(POR_HEALTH), XrayMsg.Gauge_ID.ToString());

    The Parse method creates an instance of the enumerated type. Now we can use the name of the gauge rather than the numeric identity:

      if (GaugeHealth == POR_HEALTH.POR1 || GaugeHealth == POR_HEALTH.BOTH)
      {
          XrayHealthyTag.Name = Properties.Settings.Default.POR1XRayHealthyTag;
      }
      else if (GaugeHealth == POR_HEALTH.POR2)
      {
          XrayHealthyTag.Name = Properties.Settings.Default.POR2XRayHealthyTag;
      }


  • Ubuntu 12.04 suddenly will not boot after compiz crash on lockscreen

    - by schonjones
    Yesterday I went to work and closed my laptop. When I got home, I couldn't get the lock screen to come back up. I pressed Ctrl+Alt+F2 to get to a terminal and tried to kill compiz; compiz was not running, so I tried to restart it and got an error opening the display. I then tried to restart lightdm and again it couldn't open the display. So I restarted, and now I am stuck on the loading screen. I tried booting in recovery mode and all options hang except root, but there I'm stuck with a read-only filesystem. Booting from a live CD I can mount the drive, but again I'm stuck with read-only access. I'm completely stuck here. Checking my default-display-manager returns "/usr/sbin/elsa" as the only line, and I can't edit it. I've got an 11.10 live CD, and my /home is encrypted, so I can't back anything up at the moment either. How can I resolve this, or at least back up some of my data? Please help!
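
    A hedged pointer on the read-only part: in the root recovery shell the root filesystem starts read-only by design and can usually be remounted writable, which would allow editing the display manager entry. A minimal sketch:

      mount -o remount,rw /
      nano /etc/X11/default-display-manager   # on stock 12.04 this should read /usr/sbin/lightdm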


  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles:

    1. Get me the set of players at position X,Y
    2. Get me the position of the player with key K

    My current implementation is this:

      class CoordinateMap<V> {
          Map<Long,Set<V>> coords2value;
          Map<V,Long> value2coords;

          // convert (int x, int y) to a long key - this is tested, works for all values -1bil to +1bil
          // (my map will NOT require more than 1 bil tiles from the origin :)
          private Long keyFor(int x, int y) {
              int kx = x + 1000000000;
              int ky = y + 1000000000;
              return (long)kx | (long)ky << 32;
          }

          // extract the x and y from the key
          private int[] coordsFor(long k) {
              int x = (int)(k & 0xFFFFFFFF) - 1000000000;
              int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000;
              return new int[] { x, y };
          }
      }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me that if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!


  • Grouping a comma separated value on common data [closed]

    - by Ankit
    I have a table with col1 (int), col2 (varchar, holding comma-separated values) and a third column, group, for assigning a group number to each row. The table looks like:

      col1  col2   group
      --------------------
      1     2,3,4
      2     5,6
      3     1,2,5
      4     7,8
      5     11,3
      6     22,8

    This is only a sample of the real data. Now I have to assign a group number to each row in such a way that the output looks like:

      col1  col2   group
      --------------------
      1     2,3,4  1
      2     5,6    1
      3     1,2,5  1
      4     7,8    2
      5     11,3   1
      6     22,8   2

    The logic for assigning the group number is that rows whose col2 lists share any value must get the same group number: wherever '2' appears in col2, that row gets the same group, and because 2, 3 and 4 sit together in one row, any of those three values found anywhere in col2 joins that row into the same group. The major part: 2,3,4 and 1,2,5 both contain 2, so all of the values 1,2,3,4,5 must be assigned the same group number. I tried a stored procedure with MATCH ... AGAINST on col2 but am not getting the desired result. Most importantly, I can't use normalization, because I can't afford to build a new table from my original table, which has millions of records; even normalization is not helpful in my context. This question is also on Stack Overflow, with a bounty, at this link. Achieved so far: I have set the group column to auto-increment and then wrote this procedure:

      BEGIN
        declare cil1_new,col2_new,group_new int;
        declare done tinyint default 0;
        declare group_new varchar(100);
        declare cur1 cursor for select col1,col2,`group` from company;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;
        open cur1;
        REPEAT
          fetch cur1 into col1_new,col2_new,group_new;
          update company set group=group_new where match(col2) against(concat("'",col2_new,"'"));
        until done end repeat;
        close cur1;
        select * from company;
      END

    This procedure runs with no syntax errors, but the problem is that I am not getting exactly the desired result.
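
    This is the classic connected-components problem (rows are linked whenever their col2 lists share a value), which is awkward to express in a single SQL statement. One hedged alternative is to compute the groups outside SQL and write them back. A sketch in Python, where the rows list stands in for a SELECT col1, col2 FROM company (writing the groups back would then be one UPDATE per row):

      # union-find over the integers appearing in col2
      rows = [(1, "2,3,4"), (2, "5,6"), (3, "1,2,5"), (4, "7,8"), (5, "11,3"), (6, "22,8")]

      parent = {}

      def find(x):
          parent.setdefault(x, x)
          while parent[x] != x:
              parent[x] = parent[parent[x]]   # path halving
              x = parent[x]
          return x

      def union(a, b):
          parent[find(a)] = find(b)

      for _, csv in rows:
          vals = [int(v) for v in csv.split(",")]
          for v in vals[1:]:
              union(vals[0], v)

      # number the components 1, 2, ... in order of first appearance
      group_no, groups = {}, []
      for col1, csv in rows:
          root = find(int(csv.split(",")[0]))
          group_no.setdefault(root, len(group_no) + 1)
          groups.append((col1, group_no[root]))   # -> UPDATE company SET `group` = ? WHERE col1 = ?

      print(groups)  # [(1, 1), (2, 1), (3, 1), (4, 2), (5, 1), (6, 2)]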


  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description: I'm designing an entity system, and I ran into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor - it just contains the component name, field types and field names. An Entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools.

    Actual problem: The problem is that some components depend on others (Model, Sprite, PhysicalBody and Animation all depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [T]ransform, [S]prite, [P]hysicalBody and [H]ealth:

      Tank:   Transform, Sprite, PhysicalBody
      BgTree: Transform, Sprite
      House:  Transform, Sprite, Health

    If I create 4 Tanks, 5 BgTrees and 2 Houses, my pools will look like:

      TTTTTTTTTTT  // Transform pool
      SSSSSSSSSSS  // Sprite pool
      PPPP         // PhysicalBody pool
      HH           // Health pool

    There is no way to process them using plain indices. I spent 3 days working on it and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity - but it wasn't a good idea. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools) - but I guess that would be a nightmare for CPU caches. Thanks
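
    One common way out, offered as a sketch rather than a prescription (and written in C to match the POD flavor of the pools; all names are hypothetical): give each pool a sparse-set index, so a component is found from an entity id in O(1) while the pool itself stays densely packed for linear processing. A subsystem then iterates its smallest pool and looks up partner components by entity id:

      #include <stdint.h>

      #define MAX_ENTITIES 4096

      /* one pool of POD components, kept dense for cache-friendly iteration */
      typedef struct {
          uint32_t sparse[MAX_ENTITIES];   /* entity id -> index into dense[] */
          uint32_t entity[MAX_ENTITIES];   /* dense index -> owning entity id */
          float    dense[MAX_ENTITIES][4]; /* the component data itself */
          uint32_t count;
      } Pool;

      /* a subsystem needing Transform+PhysicalBody walks the smaller pool
         (PhysicalBody) and fetches the partner component via the index */
      void step_physics(Pool *physical, Pool *transform)
      {
          for (uint32_t i = 0; i < physical->count; ++i) {
              uint32_t e = physical->entity[i];
              float *body  = physical->dense[i];
              float *xform = transform->dense[transform->sparse[e]];
              xform[0] += body[0];  /* e.g. integrate velocity into position */
              xform[1] += body[1];
          }
      }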


  • Class hierarchy problem in this social network model

    - by Gerenuk
    I'm trying to design a class system for a social network data model - basically a link/object system. Right now I have roughly the following structure (simplified, and with only the relevant methods shown):

      class Data:
          "used to handle the data with mongodb"
          "can link, unlink data and also return other linked data"
          "is basically a proxy object that only stores _id and accesses mongodb on requests"
          "it looks like {_id: ..., _out: [id1, id2,...], _inc: [id3, id4, ...]}"
          def get_node(self, id):
              "create a new Data object from the underlying mongodb"
              "each data object can potentially create a reference object to new mongo data"
              "this is needed when the data returns the linked objects"

      class Node:
          """
          this class proxies linking calls to .data
          it includes additional network logic operations
          whereas Data only contains a basic database solution
          """
          def __init__(self, data):
              "the infrastructure realization is stored by composition in the included object data"
              "Node basically proxies most calls to the infrastructure object data"
          def get_node(self, data):
              "creates a new object of class Object or Link depending on data"

      class Object(Node):
          "can have multiple connections to Link"

      class Link(Node):
          "has one 'in' and one 'out' connection to an Object"

    This system is working, though it perhaps wouldn't translate outside Python. Now I have two questions:

    1) I want the infrastructure of the data storage to be replaceable. Earlier I had Data as a superclass of Node so that it provided the necessary calls, but (without dirty Python tricks) you cannot replace the superclass dynamically. Is using composition therefore recommended? The drawback is that I have to proxy most calls (link, unlink, etc.). Any thoughts?

    2) The class Node contains the common method .get_node, which is used to build new Object or Link instances after reading out the data. Some attribute of the data decides whether the object, which is stored only by id, should be instantiated as an Object or a Link. The problem here is that Node needs to know about Object and Link in advance, which seems dodgy. Do you see a different solution? Both Object and Link need to instantiate one of all possible types depending on what they find in their linked data. Are there any other ideas for implementing a flexible Object/Link structure where the underlying database storage is isolated?
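
    On question 2, one hedged option is a small registry, so Node never hard-codes its subclasses: each concrete class declares which kind of data it handles, and get_node dispatches through the table. A sketch (the data.kind field name is an assumption):

      NODE_TYPES = {}

      def node_type(kind):
          """Class decorator: register a Node subclass for a data kind."""
          def register(cls):
              NODE_TYPES[kind] = cls
              return cls
          return register

      class Node:
          def __init__(self, data):
              self.data = data

          def get_node(self, data):
              # dispatch on the stored kind instead of naming Object/Link here
              return NODE_TYPES[data.kind](data)

      @node_type("object")
      class Object(Node):
          pass

      @node_type("link")
      class Link(Node):
          pass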


  • Algorithm for optimal control on space ship using accelerometer input data

    - by mm24
    Does anyone have a good algorithm for controlling a spaceship in a vertical shooter game using accelerometer data? I have written a simple algorithm, but it works very badly. I save an initial acceleration value (used to calibrate the movement according to the user's initial position) and subtract it from the current acceleration to get a "calibrated" value. The problem is that basing the movement solely on relative acceleration loses sensitivity: certain movements become independent of the initial position. Would anyone be able to share a better solution? I am also wondering whether I should integrate input from the gyroscope hardware. Here is my code, from a Cocos2d iOS game:

      - (void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
      {
          if (calibrationLayer.visible) {
              [self evaluateCalibration:acceleration];
              initialAccelleration = acceleration;
              return;
          }

          if ([self evaluatePause]) {
              return;
          }

          ShooterScene *shooterScene = (ShooterScene *)[self parent];
          ShipEntity *playerSprite = [shooterScene playerShip];
          float accellerationtSensitivity = 0.5f;
          UIAccelerationValue xAccelleration = acceleration.x - initialAccelleration.x;
          UIAccelerationValue yAccelleration = acceleration.y - initialAccelleration.y;

          if (xAccelleration > 0.05 || xAccelleration < -0.05) {
              [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                    playerSprite.position.y + yAccelleration * 80)];
          }
          else if (yAccelleration > 0.05 || yAccelleration < -0.05) {
              [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                    playerSprite.position.y + yAccelleration * 80)];
          }
      }
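
    A common refinement, offered as a sketch rather than as the question's own code, is to low-pass filter the raw samples before steering; this smooths jitter and separates slow posture drift from deliberate tilt (filteredX/filteredY are assumed instance variables seeded with the first sample, and kFilterFactor is a hypothetical tuning constant):

      // exponential low-pass filter over successive accelerometer samples
      static const double kFilterFactor = 0.1;

      filteredX = kFilterFactor * acceleration.x + (1.0 - kFilterFactor) * filteredX;
      filteredY = kFilterFactor * acceleration.y + (1.0 - kFilterFactor) * filteredY;

      // steer with the filtered tilt relative to the calibrated rest position
      double dx = filteredX - initialAccelleration.x;
      double dy = filteredY - initialAccelleration.y;

    For gyroscope input, Core Motion's CMMotionManager device-motion API already fuses accelerometer and gyro data, which is the usual next step when tilt alone feels laggy or noisy.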


  • First Experience with Web Services

    When I first started programming with Microsoft .NET (the 1.0 Framework), I had a strong desire to learn how search engines indexed web sites. At that time I was working as a search engine spammer, creating web pages to generate traffic for specific themes for various clients. One way I attempted to better understand .NET was to build a web spider that analyzed web pages on demand. An example of the spider is hosted at AddLinkz.com. After my spider was built, I had no real idea what I could or should do with it until I found the MSN Search API. I used this web service to compare its results with my spider's. Additionally, I used the API to feed my .NET web spider new URLs based on specific search terms. MSN's search API was very easy to use: I just had to request information by calling a web URL with parameters via a GET request, and the results were returned in XML. At that time all requests were limited to XML responses and a maximum of 1,000 results per query.

    Since then the entire API has gone through several reconstructions, rebrandings and new search services. Microsoft's new Bing API replaced the older MSN Search API and added several new search capabilities. These new features allow search data to be returned for web searches, image searches, news searches, and related search terms, to name a few.

    Bing API Version 2.0 SourceTypes:

      Web            Searches for web content                              (e.g. sushi)
      Image          Searches for images on the web                        (e.g. sushi)
      News           Searches news stories                                 (e.g. sushi)
      InstantAnswer  Searches Encarta online                               (e.g. what is sushi, convert 5 feet to meters, x*5=7, 2 plus 2)
      Spell          Searches Encarta dictionary for spelling suggestions
      Phonebook      Searches phonebook entries                            (e.g. sushi in Los Angeles)
      RelatedSearch  Returns the query strings most similar to yours
      Ad             Returns advertisements to incorporate with results

    I currently plan to start using the web search feature of the new Bing 2.0 API in an open source project related to exception management. Currently, it is still in the conception phase.


  • Validating data to nest if or not within try and catch

    - by Skippy
    I am validating data; in this case I want one of three ints. I am asking this question because it is the fundamental principle I'm interested in. This is a basic example, but I am developing best practices now, so that when things become more complicated later, I am better equipped to manage them. Is it preferable to have the try and catch followed by the condition:

      public static int getProcType() {
          try {
              procType = getIntInput("Enter procedure type -\n"
                      + " 1 for Exploratory,\n"
                      + " 2 for Reconstructive, \n"
                      + "3 for Follow up: \n");
          } catch (NumberFormatException ex) {
              System.out.println("Error! Enter a valid option!");
              getProcType();
          }
          if (procType == 1 || procType == 2 || procType == 3) {
              hrlyRate = hrlyRate(procType);
              procedure = procedure(procType);
          } else {
              System.out.println("Error! Enter a valid option!");
              getProcType();
          }
          return procType;
      }

    Or is it better to put the if within the try and catch?

      public static int getProcType() {
          try {
              procType = getIntInput("Enter procedure type -\n"
                      + " 1 for Exploratory,\n"
                      + " 2 for Reconstructive, \n"
                      + "3 for Follow up: \n");
              if (procType == 1 || procType == 2 || procType == 3) {
                  hrlyRate = hrlyRate(procType);
                  procedure = procedure(procType);
              } else {
                  System.out.println("Error! Enter a valid option!");
                  getProcType();
              }
          } catch (NumberFormatException ex) {
              System.out.println("Error! Enter a valid option!");
              getProcType();
          }
          return procType;
      }

    I am thinking the if within the try may be quicker, but it may also be clumsy. Which would be better as my programming becomes more advanced?
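
    For what it's worth, both versions retry by recursing, which re-runs the side effects and deepens the call stack on every bad input; a loop keeps validation and retry in one place. A sketch, assuming getIntInput, hrlyRate and procedure from the question:

      public static int getProcType() {
          while (true) {
              try {
                  int procType = getIntInput("Enter procedure type -\n"
                          + " 1 for Exploratory,\n"
                          + " 2 for Reconstructive,\n"
                          + " 3 for Follow up: \n");
                  if (procType >= 1 && procType <= 3) {
                      hrlyRate = hrlyRate(procType);
                      procedure = procedure(procType);
                      return procType;
                  }
              } catch (NumberFormatException ex) {
                  // fall through to the retry message
              }
              System.out.println("Error! Enter a valid option!");
          }
      }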


  • How to rotate different data across the days of the week in PHP [migrated]

    - by shihon
    I am working on a project in which I have to distribute a different ad each day. The ads, in the form of an array, look like:

      $ad = array(
          'attribute1_value' => "12",
          'attribute2_value' => "xyz",
          'attribute3_value' => 'http://example.com',
          'attribute4_value' => 'data');

    The logic I am using with a switch statement:

      $day = date('w', time());
      switch ($day) {
          case '0':
              if ($day == '0') {
                  $count = 0;
                  echo $ad;
                  $count++;
              } else {
                  $count = 7;
                  echo $ad;
              }
              break;
          case '1':
              if ($day == '1') {
                  $count = 1;
                  echo $ad;
                  $count++;
              } else {
                  $count = 8;
                  echo $ad;
              }
              break;

    The problem: if I have ~15 ads, I want to distribute one ad per day. date('w') outputs the current day of the week, but after day 7 (Saturday), ad number 8 should start on Sunday. I have to implement this scenario using the date function. I also have to show ads only to those users who have not seen them before. I am not an expert in PHP; as a beginner I am working with PHP/MySQL. Kindly help me improve this concept.
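
    A much simpler rotation, offered as a hedged sketch: index the ad list by a running day count instead of the day of the week, so the sequence keeps advancing past day 7 and wraps when the list ends ($ads is assumed to hold all ~15 ad arrays):

      <?php
      // rotate one ad per day across the whole list, wrapping at the end
      $ads = array(/* ... ~15 ad arrays like $ad above ... */);

      $dayNumber = floor(time() / 86400);        // days since the Unix epoch
      $todaysAd  = $ads[$dayNumber % count($ads)];

      // the per-user "has not seen this ad" check would go here, e.g. a
      // (hypothetical) user_seen table keyed by user id and ad index

    The modulo is what handles the "after ad 15, start over" behaviour without any per-day cases.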


  • Performance of 8-bit operations on 64-bit architecture

    - by wobbily_col
    I am usually a Python / database programmer, and I am considering using C for a problem. I have a set of sequences, 8 characters long with 4 possible characters each. My problem involves combining sets of these sequences and filtering which sets match a criterion. The combinations of 5 run into billions of rows and take around an hour to run. So: I can represent each sequence as 2 bytes. If I am working on a 64-bit architecture, will I gain any advantage by keeping these data structures as 2 bytes when I generate the combinations, or will I be just as well off storing them as 8 bytes / a double? (64 bits = 8 x 8.) If I am on a 64-bit architecture, all registers will be 64-bit, so in terms of operations that shouldn't be any faster (please correct me if I am wrong). Will I gain anything from the smaller storage requirements - can I fit more combinations in memory, or will they all take up 64 bits anyway? And finally, am I likely to gain anything by coding it in C? I have a first version, which stores each sequence as a small int in a MySQL database. It then self-joins the table to itself a number of times in order to generate all the possible combinations. The performance is acceptable, depending on how many combinations are generated. I assume the database must involve some overhead.
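
    For concreteness, a sketch of the 2-bit packing in C: an 8-character sequence over a 4-letter alphabet fits exactly in a uint16_t (the particular alphabet here is an assumption):

      #include <stdint.h>

      /* map a 4-letter alphabet to 2-bit codes (alphabet assumed) */
      static uint16_t pack(const char *seq)   /* seq: exactly 8 chars */
      {
          static const char alphabet[4] = { 'A', 'C', 'G', 'T' };
          uint16_t packed = 0;
          for (int i = 0; i < 8; ++i) {
              uint16_t code = 0;
              for (int c = 0; c < 4; ++c)
                  if (seq[i] == alphabet[c]) code = (uint16_t)c;
              packed = (uint16_t)((packed << 2) | code);
          }
          return packed;   /* compare, combine and hash with single integer ops */
      }

    On a 64-bit CPU a single uint16_t operation is no faster than a 64-bit one, but the 2-byte representation lets four times as many sequences fit per cache line and per memory fetch, which is usually where the win is when iterating billions of combinations.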


  • Process Improvement and the Data Professional

    - by BuckWoody
    Don’t be afraid of that title – I’m not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are the “DBA” role. That means we often have more to do than time to do it in. And, oddly enough, the first thing that is sacrificed is process improvement – the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and…well, you know all about that. Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked “where do I start”? Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things or find out ways others have automated their systems. In general, find out what you’re doing and how, and then see if that can be improved. It’s kind of like doing a performance tuning gig on yourself! If you’re pressed for time, look for bite-sized articles (like the ones I’ve done here for PowerShell and SQL Server) that you can follow in a “serial” fashion. In a short time you’ll have a new set of knowledge you can use to make your day faster.


  • What's the proper term for a function inverse to a constructor? Deconstructor, destructor, or something else?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor, or perhaps a deconstructor. For example, let's have (in Haskell):

      newtype Wrap = Wrap { unwrap :: Int }

    Here Wrap is the constructor and unwrap is - what? I've seen both, for example:

      "... Most often, one supplies smart constructors and destructors for these to ease working with them. ..." at the Haskell wiki, or

      "... The general theme here is to fuse constructor - deconstructor pairs like ..." at the Haskell wikibook (there it's probably meant in a slightly more general sense).

    The questions are: How do we call unwrap in functional programming? Deconstructor? Destructor? Or by some other term? And to clarify, is this terminology applicable to other functional languages, or is it used just in the Haskell community?


  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when joining and while playing. A pretty vague question, I agree, so let me refine it. Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and in all the surrounding cells, no more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. While playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would keep the local database in sync. Now let's say the player moves and arrives in another cell. The set of surrounding cells changes, and for every newly subscribed cell the same technique as used when joining has to be applied: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried and tested techniques in this domain. Thanks!


  • Ninject/DI: How to correctly pass initialisation data to injected type at runtime

    - by MrLane
    I have the following two classes:

      public class StoreService : IStoreService
      {
          private IEmailService _emailService;

          public StoreService(IEmailService emailService)
          {
              _emailService = emailService;
          }
      }

      public class EmailService : IEmailService
      {
      }

    Using Ninject I can set up bindings, no problem, to get it to inject a concrete implementation of IEmailService into the StoreService constructor. StoreService is actually injected into the code-behind of an ASP.NET WebForm like so:

      [Ninject.Inject]
      public IStoreService StoreService { get; set; }

    But now I need to change EmailService to accept an object that contains SMTP-related settings (pulled from the ApplicationSettings in Web.config). So I changed EmailService to now look like this:

      public class EmailService : IEmailService
      {
          private SMTPSettings _smtpSettings;

          public void SetSMTPSettings(SMTPSettings smtpSettings)
          {
              _smtpSettings = smtpSettings;
          }
      }

    Setting SMTPSettings in this way also requires it to be passed into StoreService (via another public method). This has to be done in the Page_Load method of the WebForm code-behind (I only have access to the settings class in the UI layer). With manual/poor man's DI I could pass SMTPSettings directly into the constructor of EmailService and then inject EmailService into the StoreService constructor. With Ninject I don't have access to the instances of injected types outside of the objects they are injected into, so I have to set their data AFTER Ninject has already injected them, via a separate public setter method. This seems wrong to me. How should I really be solving this scenario?
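
    One hedged option is to give EmailService its dependency back through the constructor and let the container supply the settings at binding time, so no post-construction setter is needed. A sketch of the composition root, assuming EmailService regains a constructor taking SMTPSettings and that kernel is the StandardKernel (the settings lookup itself is an assumption):

      var smtpSettings = new SMTPSettings
      {
          // populated from the Web.config ApplicationSettings here
      };

      kernel.Bind<IEmailService>()
            .To<EmailService>()
            .WithConstructorArgument("smtpSettings", smtpSettings);

      // or, equivalently, construct it yourself:
      kernel.Bind<IEmailService>().ToMethod(ctx => new EmailService(smtpSettings));

    Either way StoreService keeps its single constructor dependency on IEmailService and never has to know about SMTP at all.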


  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C#, based largely on Dan Vanderboom's generic implementation. I am now considering how to handle a Count property, which Dan does not implement. The obvious and easy way would be a recursive call that traverses the tree, happily adding up nodes (or iteratively traversing the tree with a queue and counting nodes, if you prefer). It just seems expensive. (I may also want to lazy-load some of my nodes down the road.) Alternatively, I could maintain a count at the root node: all children would traverse up to and/or hold a reference to the root and update an internally settable count property on changes. This pushes the iteration problem to whenever I want to break off a branch or clear all children below a given node. It is generally less expensive, and puts the heavy lifting in what I think will be less frequently called functions. It seems a little brute-force, though, and that usually means exception cases I haven't thought of yet (or bugs, if you prefer). Does anyone have an example of an implementation which keeps a count for an unbalanced and/or non-binary tree structure rather than counting on the fly? Don't worry about the lazy loading or the language; I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
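
    Since an example is explicitly requested, here is a minimal sketch of the cached-count approach in C# (type and member names are hypothetical, not Dan Vanderboom's): every structural change walks up the parent chain adjusting the cached value, so reading Count is O(1) and the cost moves into Add/Remove:

      using System.Collections.Generic;

      public class TreeNode<T>
      {
          private readonly List<TreeNode<T>> _children = new List<TreeNode<T>>();

          public T Value;
          public TreeNode<T> Parent { get; private set; }

          // nodes in the subtree rooted here, kept current by Add/Remove
          public int Count { get; private set; }

          public TreeNode() { Count = 1; }

          public void Add(TreeNode<T> child)
          {
              child.Parent = this;
              _children.Add(child);
              for (var n = this; n != null; n = n.Parent)
                  n.Count += child.Count;          // child may bring a whole subtree
          }

          public void Remove(TreeNode<T> child)
          {
              if (_children.Remove(child))
              {
                  for (var n = this; n != null; n = n.Parent)
                      n.Count -= child.Count;
                  child.Parent = null;
              }
          }
      }

    Breaking off a branch stays cheap here because the detached child carries its own subtree count with it; no traversal is ever needed.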


  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will use mod_rewrite to generate new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html" with a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the metadata is to improve our page rankings and keep in line with best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranged heading tags, a new menu on all pages, new content in the footer, and extra pieces of dynamic content relating to other pages). For the new site I plan to use 301 redirects from all the old URLs to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site which has to build up trust over time, or will the original page rankings still have an effect?
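
    For the redirect side, a minimal sketch of what the 301 mapping might look like in Apache (assuming mod_alias in an .htaccess at the site root; the URLs are the example from the question):

      # permanently redirect the old static page to its new CMS URL
      Redirect 301 /about_us.html /about-us/

    One line per old page keeps the mapping auditable, and the 301 status is what signals Google to transfer the old URL's ranking signals to the new one.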


  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 3)

    - by hinkmond
    In Part 2, I described what connections you need to make for this demo using a Hexbug Spider robot, a Raspberry Pi, and Java SE Embedded for programming. Here are some photos of me doing the soldering. Software engineers should not be afraid of a little soldering work. It's all good. See: Skynet Big Data Demo (Part 2). One thing to watch out for when you open the remote is that there may be some glue covering the contact points. Make sure to use an X-Acto knife or small screwdriver to scrape away any glue or non-conductive material covering each place where you need to solder. And after you are done with your soldering, and you have given the solder enough time to cool, make sure all your connections are marked so that you know which wire goes where. Give each wire a very light tug to make sure it is soldered correctly and is making good contact. There are lots of videos on the Web to help you if this is your first time soldering. Check out Lady Ada's links (from adafruit.com) on how to solder if you need some additional help: http://www.ladyada.net/learn/soldering/thm.html If everything looks good, zip everything back up and meet back here to learn how to connect these wires to your Raspberry Pi. That will be it for the hardware part of this project. See, that wasn't so bad. Hinkmond

