Search Results

Search found 21331 results on 854 pages for 'require once'.


  • How to generate SPMetal for a specific list (OOTB: like tasks or contacts) with custom columns

    - by KunaalKapoor
    SPMetal generates entity classes so that you can use LINQ against a list in SharePoint 2010. By default, when you run SPMetal against a site, you get a code-generated file covering most of the lists, and probably more than you need. Here is an MSDN link with some background on SPMetal: http://msdn.microsoft.com/en-us/library/ee538255(office.14).aspx

    But what if you want to generate the code for only one list? It is quite simple once you figure it out: you add an XML file that overrides SPMetal's default settings and pass it in with the /parameters option. First, create a folder containing two files, GenerateSPMetalCode.bat and SPMetal.xml. Here is the content of each:

    GenerateSPMetalCode.bat

        "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml
        pause

    SPMetal.xml

        <?xml version="1.0" encoding="utf-8"?>
        <Web AccessModifier="Internal" xmlns="http://schemas.microsoft.com/SharePoint/2009/spmetal">
          <List Name="ListName">
            <ContentType Name="ContentTypeName" Class="GeneratedClassName" />
          </List>
          <ExcludeOtherLists></ExcludeOtherLists>
        </Web>

    You will have to change some of the text in these files to match your SharePoint setup. In the bat file, change http://YourServer to the URL of the web where your list lives. In SPMetal.xml, change ListName to the name of your list and ContentTypeName to the name of the content type you want to extract. GeneratedClassName can be anything, but you should probably rename it to something more sensible. The List/ContentType element is what ensures that any custom columns added to an OOTB list like Contacts or Tasks are also generated; a regular generation misses them.

    So now when you run it, the SPMetal command reads SPMetal.xml and overrides its defaults. The ExcludeOtherLists element restricts generation to only the lists you specify. (For some reason I got an error if I placed this element above the List element.) You should now have a code file called OutPutFileName.cs. You can now put this in your SharePoint project for use with your LINQ queries against that list. I will soon write a LINQ example that uses the generated class.

    UPDATE: Add the /namespace parameter to put the generated code in a namespace:

        "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /namespace:MySPMetalNameSpace /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml
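
    In the meantime, here is a minimal, hedged sketch of what consuming the generated class with LINQ to SharePoint can look like. GeneratedClassName and ListName are the placeholders from SPMetal.xml above, and the Title property is assumed to exist on the content type; swap in your real names.

        // Minimal sketch: LINQ to SharePoint against the SPMetal-generated class.
        // GeneratedClassName/ListName are the placeholders from SPMetal.xml above.
        using System;
        using System.Linq;
        using Microsoft.SharePoint.Linq;

        class Program
        {
            static void Main()
            {
                using (var context = new DataContext("http://YourServer"))
                {
                    // GetList<T> binds the generated entity class to the list.
                    var items = context.GetList<GeneratedClassName>("ListName");

                    var query = from item in items
                                orderby item.Title   // Title assumed present
                                select item;

                    foreach (var item in query)
                        Console.WriteLine(item.Title);
                }
            }
        }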


  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric (2D) game with moderate-scale multiplayer - 20-30 players. I've had some difficulty getting a good movement prediction implementation in place.

    Right now, clients are authoritative for their own position. The server performs validation and broad-scale cheat detection, and I fully realize that the system will never be fully robust against cheating. However, the performance and implementation tradeoffs work well for me right now. Given that I'm dealing with sprite graphics, the game has 8 defined directions rather than free movement. Whenever the player changes their direction or speed (walk, run, stop), a "true" 3D velocity is set on the entity and a packet is sent to the server with the new movement state. In addition, every 250ms additional packets are transmitted with the player's current position, for state updates on the server as well as for client prediction. After the server validates the packet, it gets automatically distributed to all of the other "nearby" players. Client-side, all entities with non-zero velocity (i.e., moving entities) are tracked and updated by a rudimentary "physics" system - basically nothing more than changing the position by the velocity according to the elapsed time slice (40ms or so).

    What I'm struggling with is how to implement clean movement prediction. I have the nagging suspicion that I've made a design mistake somewhere. I've been over the Unreal, Half-Life, and all the other movement prediction/lag compensation articles I could find, but they all seem geared toward shooters: "Don't send each control change, send updates every 120ms, server is authoritative, client predicts, etc." Unfortunately, that style of design won't work well for me - there's no 3D environment, so each individual state change is important.

    1) Most of the samples I saw tightly couple movement prediction right into the entities themselves, for example by storing the previous state along with the current state. I'd like to avoid that and keep entities with their "current state" only. Is there a better way to handle this? (One possible shape for that separation is sketched below.)

    2) What should happen when the player stops? I can't interpolate to the correct position, since they might need to walk backwards or in another strange direction if their position is too far ahead.

    3) What should happen when entities collide? If the current player collides with something, the answer is simple - just stop the player from moving. But what happens if two entities take up the same space on the server? What if the local prediction causes a remote entity to collide with the player or another entity - do I stop them as well? If the prediction had the misfortune of sticking them in front of a wall that the player has gone around, the prediction will never be able to compensate, and once the error gets too high the entity will snap to the new position.
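
    For question 1, one possible shape (a hedged sketch under assumed names, not from the post) is to keep a separate tracker that owns the last server-acknowledged snapshot per entity, so the entity itself keeps only its current state:

        // Sketch: prediction state held outside the entities. A tracker stores
        // the last acknowledged movement snapshot per entity id; entities keep
        // only their current state. All names here are illustrative.
        using System.Collections.Generic;
        using System.Numerics;

        struct MovementSnapshot
        {
            public Vector3 Position;
            public Vector3 Velocity;
            public double ServerTime;   // when the server validated this state
        }

        class PredictionTracker
        {
            private readonly Dictionary<int, MovementSnapshot> _lastAcked =
                new Dictionary<int, MovementSnapshot>();

            public void OnServerUpdate(int entityId, MovementSnapshot snapshot)
            {
                _lastAcked[entityId] = snapshot;
            }

            // Extrapolate from the last acknowledged state; the caller can then
            // blend the result toward the entity's displayed position instead of
            // snapping, so stale predictions correct themselves gradually.
            public Vector3 Predict(int entityId, double now)
            {
                if (!_lastAcked.TryGetValue(entityId, out var s))
                    return Vector3.Zero; // no data yet; leave the entity alone

                float dt = (float)(now - s.ServerTime);
                return s.Position + s.Velocity * dt;
            }
        }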


  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a Database Administrator, or even a developer, is required to wear a spy's hat. This necessity is oftentimes dictated by a need to take a glimpse into a black-box application, for reasons varying from a performance issue to unauthorized access to data or resources or, as in my most recent case, a closed-source custom application abandoned by a departed contractor without source code.

    It may not be news or unknown to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This "gem" sits quietly in the Start - Programs - SQL Server <product version> - Performance Tools folder (yes, it is mostly for performance analysis, but not limited to it), ready to help you.

    So, to the action: let's start it up. Once it is ready, click File - New Trace, or press Ctrl-N. The standard connection dialog you have seen in SSMS comes up, and you connect the standard way. One side note here: you will be able to connect only if your account is a member of the sysadmin fixed server role or has the ALTER TRACE permission. Upon a successful connection you will see the initial trace properties dialog, which offers a wide variety of predefined templates. To shorten your time to results, opt for the TSQL_Grouped template.

    Now you need to set it up. In some cases you will know in advance the principal's login name (account) that needs to be monitored, and in some (like mine) you will not. But it is VERY helpful to monitor just one particular account, to minimize the amount of results returned. So if you know it, go to the Events Selection tab, click the Column Filters button, and in the dialog that appears key in the account being monitored, without any mask (or wildcard). If you do not know the principal name, you will need to poke around and look for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with the connection string in it, not encrypted. This made my endeavor very easy.

    So after I entered the account to monitor, I clicked the Run button and also started my black-box application. Voilà, in under a minute I had the SQL statement captured.


  • Remote Graphics Diagnostics with Windows RT 8.1 and Visual Studio 2013

    - by Michael B. McLaughlin
    Originally posted on: http://geekswithblogs.net/mikebmcl/archive/2013/11/12/remote-graphics-diagnostics-with-windows-rt-8.1-and-visual-studio.aspx

    This blog post is a brief follow-up to my What's New in Graphics and Game Development in Visual Studio 2013 post on the MVP Award blog. While writing that post I was testing out various features to make sure everything worked as expected. I had some trouble getting Remote Graphics Diagnostics (a/k/a remote graphics debugging) working on my first generation Surface RT (upgraded to Windows RT 8.1). It was all the more strange since I could use remote debugging when doing CPU debugging; it was just graphics debugging that was causing trouble. After some discussions with the great folks who work on the graphics tools in Visual Studio, they were able to repro the problem and recommend a solution: my Surface RT needed the ARM Kits policy installed on it. Once I followed the instructions on the previous link, I could successfully use Remote Graphics Diagnostics on my Surface RT.

    Please note that this requires Windows RT 8.1 RTM (i.e. not Preview) and that Remote Graphics Diagnostics on ARM only works when you are using Visual Studio 2013, as it is a new feature (it should work just fine using the Express for Windows version).

    Also, when I installed the ARM Kits policy I needed to do two things to get it to work properly. First, when following the "How to install the Kits policy" instructions, I needed to copy the SecureBoot folder into Program Files on my Surface RT (specifically, I copied the SecureBoot folder to "C:\Program Files\Windows Kits\8.1\bin\arm\" on my Surface RT, creating any necessary directories). It may work if it's in any system folder; I didn't test any others after I got it working. I had initially put it in my Downloads folder and tried installing it from there. When the machine restarted it displayed a worrisome error message. I repeatedly pressed the button that would allow me to retry, and eventually the machine rebooted and managed to recover itself to its previous state. Second, I needed to install it as an Administrator. The instructions say that this might be necessary; for me it was.

    Remote Graphics Diagnostics is a great new feature in Visual Studio 2013, so I definitely encourage all of you to check it out!


  • Why are my scene's depth values not being written to my DepthStencilView?

    - by dotminic
    I'm rendering to a depth map in order to use it as a shader resource view, but when I sample the depth map in my shader, the red component has a value of 1 while all other channels have a value of 0. The Texture2D I use to create the DepthStencilView is bound with the D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE flags, the DepthStencilView has the DXGI_FORMAT_D32_FLOAT format, and the ShaderResourceView's dimension is D3D11_SRV_DIMENSION_TEXTURE2D.

    I'm setting the depth map render target and drawing my scene; once that is done, the back buffer render target and depth stencil are set on the output merger, and I'm using the depth map shader resource view as a texture in my shader, but the depth value in the red channel is constantly 1. I'm not getting any runtime errors from D3D, and no compile-time warnings or anything. I'm not sure what I'm missing here at all. I have the impression the depth value is always being set to 1. I have not set any depth/stencil states, and AFAICT depth writing is enabled by default. The geometry is being rendered correctly, so I'm pretty sure depth writing is enabled.

    The device is created with the appropriate debug flags:

        #if defined(DEBUG) || defined(_DEBUG)
            deviceFlags |= D3D11_CREATE_DEVICE_DEBUG | D3D11_RLDO_DETAIL;
        #endif

    This is how I create my depth map (I've omitted error checking for the sake of brevity; note the descriptor is consistently called td here, where my original paste mixed in a second name for it):

        D3D11_TEXTURE2D_DESC td;
        td.Width = width;
        td.Height = height;
        td.MipLevels = 1;
        td.ArraySize = 1;
        td.Format = DXGI_FORMAT_R32_TYPELESS;
        td.SampleDesc.Count = 1;
        td.SampleDesc.Quality = 0;
        td.Usage = D3D11_USAGE_DEFAULT;
        td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
        td.CPUAccessFlags = 0;
        td.MiscFlags = 0;
        _device->CreateTexture2D(&td, 0, &this->_depthMap);

        D3D11_DEPTH_STENCIL_VIEW_DESC dsvd;
        ZeroMemory(&dsvd, sizeof(dsvd));
        dsvd.Format = DXGI_FORMAT_D32_FLOAT;
        dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        dsvd.Texture2D.MipSlice = 0;
        _device->CreateDepthStencilView(this->_depthMap, &dsvd, &this->_dmapDSV);

        D3D11_SHADER_RESOURCE_VIEW_DESC srvd;
        srvd.Format = DXGI_FORMAT_R32_FLOAT;
        srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvd.Texture2D.MipLevels = td.MipLevels;
        srvd.Texture2D.MostDetailedMip = 0;
        _device->CreateShaderResourceView(this->_depthMap, &srvd, &this->_dmapSRV);


  • Code Design question, circular reference across classes?

    - by dsollen
    I have no code here, as this is more of a design question (I assume this is still the best place to ask it). I have a very simple server in Java which stores a mapping between certain values and UUIDs, to be used by many systems across multiple platforms. It accepts a connection from a client and creates a ClientSocket object which stores the socket and all the other relevant data unique to that connection. Each ClientSocket runs in its own thread and blocks on the socket waiting for a read. I expect very little strain on this system; it will rarely get called, but when it does get a call it will need to respond quickly, and due to the risk of a peak time with multiple calls coming in at once, threaded is still better. Each thread has a reference to a Mapper class which stores the mapping of UUIDs that it reports to others (with proper synchronization, of course).

    This all works until I have to add a new UUID to the list. When this happens I want to report to all clients that care about that particular UUID that a new one was added. I can't multicast (a limitation of the system I'm running on), so I'm having each socket send the message to the client through the established socket. However, since each thread only knows about the socket it's waiting on, I didn't have a clear method of looking up every thread/socket that cares about the data to inform them of the new UUID. Polling is out, mostly because it seems a little too convoluted to try to maintain a list of newly added UUIDs.

    My solution as of now is to have the 'parent' class, which creates the Mapper class and spawns all the threads, pass itself as an argument to the Mapper. Then when the Mapper creates a new UUID, it can make a call to the parent class telling it to send out updates to all the other sockets that care about the change (the shape of this is sketched below). I'm concerned that this may be a bad design due to the use of a circular reference: the parent has a reference to the Mapper (to pass it to new ClientSocket threads) and the Mapper points back to the parent. It doesn't really feel like a bad design to me, but I wanted to check, since circular references are supposed to be bad.

    Note: I realize this means that the thread associated with whatever socket originally received the request that spawned the creation of a UUID is going to pay the 'cost' of outputting to all the other clients that care about the new UUID. I don't care about this; as I said, I expect clients to receive only intermittent messages. It's unlikely for one socket to receive multiple messages at one time, and there won't be that many sockets, so it shouldn't take too long to send messages to each of them. Perhaps later I'll fix the fact that I'm saddling a higher workload on whatever unfortunate thread gets the first request, but for now I think it's fine.
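
    To make the structure concrete, here is a compact sketch of exactly the design described, rendered in C# (the original is Java; all names are illustrative):

        // The parent creates the Mapper and hands itself in, so the Mapper can
        // ask the parent to notify every client socket when a UUID is added.
        using System;
        using System.Collections.Generic;

        class Server
        {
            private readonly Mapper _mapper;
            private readonly List<ClientHandler> _clients = new List<ClientHandler>();

            public Server()
            {
                _mapper = new Mapper(this);   // parent -> mapper
            }

            // Called back by the Mapper: the other half of the circular reference.
            public void BroadcastNewUuid(Guid uuid)
            {
                lock (_clients)
                {
                    foreach (var client in _clients)
                        client.SendUuidAdded(uuid);
                }
            }
        }

        class Mapper
        {
            private readonly Server _parent;  // mapper -> parent

            public Mapper(Server parent) { _parent = parent; }

            public void AddUuid(Guid uuid)
            {
                // ... store the mapping (synchronized) ...
                _parent.BroadcastNewUuid(uuid);
            }
        }

        class ClientHandler
        {
            public void SendUuidAdded(Guid uuid) { /* write to this client's socket */ }
        }

    If the cycle itself ever becomes a concern, the parent reference can be narrowed to a small callback interface (the Mapper then depends on "something that can broadcast" rather than on the whole server class), which keeps the runtime object graph identical while removing the hard dependency cycle.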


  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image-based environment (hint: Virtualization Template, perhaps?!?).

    Planning: Includes requirements information, documentation, how-to guides, online documentation installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor. Both tools are very important to ensure your deployment and installation will be successful.

    Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances. You can add new nodes to an existing cluster, and also perform upgrades (in this case to SQL Server 2008 R2). Also part of this is the option to find available updates.

    Maintenance: In this section we find options that will assist us in tasks like repairing corrupt installations or removing nodes from a cluster. An interesting option (we should discuss its benefits later in another post) is the Edition Upgrade; this is a feature expansion and addition based on your product installation (Developer to Enterprise, for example).

    Tools: From the System Configuration Checker, to identify readiness for deployment in a successful manner, to being able to report on installed features, to running upgrades of existing SSIS packages developed in the 2005 offering to the 2008 R2 release.

    Resources: Useful and essential links to gather information and guidance.

    Advanced: Here is where it gets interesting. I break this down into three main groups:

    - Installation Automation: When you install SQL Server, a configuration file gets dropped (ConfigurationFile.ini) that allows you to perform automated installations. There are switches and options that go with this to make the process work; a hedged example follows below.
    - Cluster configuration for Sysprep: Create images that are cluster-ready. Two options: start the prep work, then complete it once at the final destination.
    - Stand-alone configuration for Sysprep: Like the clustering counterpart, two options, prep and complete, giving you the option to create standard templates for your SQL Server deployments.

    I find it fitting that the three topics listed here should (and will) be additional topics I discuss.

    Options: Very clear and specific about what this means. Select the processor type or the installation media root path.
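
    As a small taste of the Installation Automation group, SQL Server setup can consume that dropped configuration file from the command line. A minimal, illustrative invocation (run from the installation media root; your file path will differ) would look like this:

        Setup.exe /ConfigurationFile=ConfigurationFile.ini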


  • Ubuntu 12.04.1 failing in a VM, both VBox and VMware, on a new HP Envy 4t-1000

    - by Chas
    Brand new to Linux, and getting frustrated trying to get an environment up with Ubuntu. My primary goal is to learn Linux and Apache/PHP development. I need to keep Windows as the main OS on my machine for work, so I'm trying to virtualize Ubuntu 12.04.1, without luck (many attempts). I have a new HP Envy 4t-1000 with 16GB RAM, and 32GB SSD caching with a 500GB spindle hard drive. The graphics card is an Intel HD 3000 with an AMD Radeon 7670M.

    When installing Ubuntu desktop in VBox, I'm getting this result: https://forums.virtualbox.org/viewtopic.php?f=6&t=51939

    With VMware Workstation 7 (patched), I complete the install of Ubuntu, it reboots, the purple desktop briefly flashes, then it drops to a command line. I bought a beginning Ubuntu book, and it recommends trying to manually configure graphics if this happens. So I tried doing a safe boot holding Shift - I get to the first screen (GRUB), which loads fine, and I choose recovery mode. After choosing recovery mode, I get the recovery mode options and can arrow down to what the book suggests: 'Run in failsafe graphic mode.' Once I select this option, I get a black screen with a large white dialogue box. At the top it says "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." Then there is an OK button way down at the bottom.

    When I select OK I get a menu with a few options; the book recommended 'reconfigure graphics.' When I try this, I get a menu of two options: 1) use generic (default) configuration, or 2) use backup. I've tried both options several times; hitting OK just refreshes the screen and nothing more. Rebooting at this point just goes back to the command line as before. I don't know what to do at this point; I've spent too many hours this weekend trying in both VBox and VMware to get Ubuntu going. Isn't there some very basic graphics display mode or something I can use to at least get into the desktop?

    I explored GRUB some more and tried to look at the startup and X server logs - both are blank. No help there, I guess? When I try to choose 'Edit the configuration file' and then 'ok', the screen just refreshes on the same menu options and nothing happens. Thanks for any advice. I really need to focus on learning Linux, Apache and PHP, so perhaps Ubuntu just won't work on my hardware? Any other suggestions? I will need to virtualize - THANKS for any help/advice.


  • What is the better way to retrieve data in an application that handles a limited amount of data?

    - by Milanix
    This is not really a coding question, since I am not adding any code here; adding my code snippets would make this question really long. Instead, I am interested in knowing better ways to retrieve data in an application that needs to handle a limited amount of data which isn't updated regularly.

    Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis, based on user preference, the actual version update happens only once every few months or so, since it is done by another authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups) x (days in a week) x (n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings.

    Now, although I have used SQLite so far, I want to know how to handle the update on the database. Should I show a progress dialog saying the application is updating, and exit the app when it's done? Since my updates are infrequent, I don't think this will really harm the user experience, but is there any better way to do it? I don't want the update to be made while the user is searching, which also uses the database; that will cause a "database already open" exception - at least, I have faced this problem before.

    Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade performance?

    It would really be a great help if anyone can give me a better overview of it, or maybe a counter-argument against each approach. Many thanks!


  • Why is everything crashing?

    - by Kopkins
    I've been using Ubuntu for a while now and I love it; I wouldn't think of using another OS unless I can't fix this issue. The install I'm on is only around a month and a half old. I'm running 12.04 64-bit on an 8,1 MBP.

    Up until around 2 weeks ago everything was running smoothly. Around then, applications started crashing and weird things started happening. At first I thought it was just certain applications. The first thing to start giving me trouble was Compiz. Occasionally Compiz will stop decorating windows and lose many other functions. Running compiz --replace fixes this, but I don't feel like doing it, and it's usually needed once a day. The other thing with this is that after running compiz --replace, my Conky window gets lost somewhere, and so I run killall conky && conky -c .conkyrc.

    But this isn't just a couple of applications; it seems like it is proliferating through my system. Last week FontForge started crashing while doing whatever task, so I ended up unable to finish what I was working on, and I didn't find a fix. Today Rhythmbox started crashing: whenever I try to play anything, Rhythmbox becomes unresponsive and needs to be forced to close. When I try to do certain things with the Disk Utility, it crashes. I get the "Ubuntu has experienced an internal error" message much more often than I would like. Frequently, applications stop appearing in the launcher; Wine almost never does anymore. After not being active for a little while, Thunderbird can only fetch my mail after I restart wireless: sudo rmmod b43 && sudo modprobe b43. Occasionally some of my startup apps don't start.

    What is my best option here? Could these be bugs? I don't want to submit a ton of vague bug reports. Reinstall? Switch OS? Thank you to anyone who responds. Kopkins


  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: there are two (so far) types of entities that can participate in combat. Let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once.

    I'm trying to write the logic engine for combat by having combatants injected into it, and I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database, and the other for my mock objects.

    I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue: a combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface like IEntityProvider that I inject into the data classes? (One possible layout along those lines is sketched below.)

    Also, with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit it completely so I don't need to check for it?

    If someone could help me just lay out a class diagram or something for this, it would help me a lot. Thanks.

    Edit: Also useful to note, the mock data class doesn't really need the entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.
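
    One possible layout along the lines the post asks about, sketched in C# for type clarity (the original is PHP, and every name here is hypothetical):

        // The entity lookup lives behind its own small interface, injected into
        // the data classes, so neither logic nor data needs to know player from
        // NPC, and the mock data class can simply ignore the provider.
        interface IEntity { /* shared player/NPC surface */ }

        interface IEntityProvider
        {
            IEntity GetEntity(int combatantId);
        }

        interface ICombatantData
        {
            int CombatantId { get; }
            IEntity Entity { get; }      // on the data side, per the post
        }

        interface ICombatantLogic
        {
            void ExecuteRound(ICombatantData self, ICombatantData target);
        }

        // DB-backed data: one class, parameterized by the provider rather than
        // subclassed per entity type.
        class DbCombatantData : ICombatantData
        {
            private readonly IEntityProvider _provider;

            public DbCombatantData(int id, IEntityProvider provider)
            {
                CombatantId = id;
                _provider = provider;
            }

            public int CombatantId { get; }
            public IEntity Entity => _provider.GetEntity(CombatantId);
        }

    Because the data class takes the provider by injection and never branches on player versus NPC, the player/NPC mismatch described above has no constructor through which it could be built.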


  • Help writing a server script to ban IPs from a list

    - by Chev_603
    I have a VPS that I use as an OpenVPN and web server. For some reason, my Apache log files are filled with thousands of these hack attempts:

        "POST /xmlrpc.php HTTP/1.0" 404 395

    These attack attempts fill up 90% of my logs. I think it's a WordPress vulnerability they're looking for. Obviously they are not successful (I don't even have WordPress on my server), but it's annoying and probably resource-consuming as well. I am trying to write a bash script that will do the following:

    1. Search the Apache logs and grab the offending IPs (even if they try it once),
    2. Sort them into a list with each unique IP on a separate line,
    3. And then block them using the iptables rules.

    I am a bash newb, and so far my script does everything except step 3. I can manually block the IPs, but that's tedious and besides, this is Linux and it's perfectly capable of doing it for me. I also want the script to be customizable so that I (or anyone else who wants to use it) can change the variables to suit whatever situation I/they may deal with in the future. Here is the script so far:

        #!/bin/bash
        ## IP LIST GENERATOR
        ## Author: Chev Young
        ## Script to search Apache logs and list IPs based on custom filters
        ##
        ## Define our variables:
        DIRECT=~/Script      ## Location of script & where to put results/temp files
        LOGFILE=/var/log/apache2/access.log  ## Logfile to search for offenders
        TEMPLIST=xml_temp    ## Temporary file name
        IP_LIST=ipstoban     ## Name of results file
        FILTER1=xmlrpc       ## What are we looking for? (Requests we want to ban)

        cd $DIRECT

        if [ ! -f $TEMPLIST ]; then
            touch $TEMPLIST  ## Create temp file
        fi

        cat $LOGFILE | grep $FILTER1 >> $DIRECT/$TEMPLIST

        ## Only interested in the IPs, so:
        sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d $DIRECT/$TEMPLIST | sort | uniq > $DIRECT/$IP_LIST

        rm $TEMPLIST         ## Clean temp file

        echo "Done. Results located at $DIRECT/$IP_LIST"

    So I need help with the next part of the script, which should ban the IPs (incoming, and perhaps outgoing too) from the resulting $IP_LIST file. I don't care if it utilizes UFW or iptables directly, as long as it bans the IPs. I'd probably run it as a cron task. What I'm having trouble with is understanding how to use each line of the result file as a separate variable, to do something like:

        ufw deny $IP1 $IP2 $IP3, etc

    Any ideas? Thanks.


  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data, nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device.

    Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet:

    1) The key needs to remain a single integral data type. Converting all existing keys to a compound key, or to a string or other type, would affect the entire code base and likely result in more bugs than it's worth.

    2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when it syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.

    One idea I have been toying around with in my head is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id, while the rest represents the actual key (a sketch of this follows below). The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices; I just need to deal with the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
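
    Here is a hedged sketch (not from the post) of that split-key idea, using C# 64-bit arithmetic; the bit widths and names are illustrative:

        // Top 5 bits carry a small per-device id; the remaining 59 bits carry
        // the local counter, so keys from two devices stay disjoint even when
        // their counters collide.
        class KeyGenerator
        {
            private const int DeviceBits = 5;
            private const int CounterBits = 64 - DeviceBits;          // 59
            private const long CounterMask = (1L << CounterBits) - 1;

            private readonly long _deviceId;   // 0..31, assigned once per device
            private long _counter;             // e.g. seeded from current time

            public KeyGenerator(long deviceId, long seed)
            {
                _deviceId = deviceId & ((1L << DeviceBits) - 1);
                _counter = seed & CounterMask;
            }

            public long NextKey()
            {
                // Mask keeps the counter from ever spilling into the device bits.
                return (_deviceId << CounterBits) | (_counter++ & CounterMask);
            }
        }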


  • Are You Meeting Social Customer Service Expectations?

    - by Mike Stiles
    Whether it's B2B or B2C, one sure path to repeat business is making sure your buyer has a memorably pleasant and successful customer service experience with you. If they get that kind of treatment consistently, that's called a relationship. And those aren't broken easily. Social customer service, driven by integrated SRM (social relationship management) technology, is the venue that can effectively connect customers not only to the brand, but to other customers. Positive experiences, once administered, don't just rest with the recipient. They're published in the form of public raves and peer-to-peer recommendation, a force far more actionable than push advertising.

    What's more, your customers have come to expect access to you and satisfaction from you using social. An NM Incite study shows 83% of Twitter users and 71% of Facebook users expect to get an answer from brands the same day they post to them on their social assets. To make sure you're responding, you've got to have a tech platform that's set up to moderate and alert, so you'll know ASAP when a customer needs help. The more integrated your social enterprise is, the faster you can not only respond, but respond with the answer they're looking for, because your system is connected to the internal resources that can surface the answer or put wheels in motion to rectify the situation in the shortest amount of time possible.

    But if you go to the necessary lengths to make sure your customers feel valued and important, will they really reward you? The study says 71% of consumers who got quick and effective responses from companies they contacted via social were more likely to recommend the brand to their friends and followers. So yes, sweeping people off their feet pays big dividends in terms of word-of-mouth marketing.

    But you should be keenly aware of the reverse side of that coin. Give people a negative experience, either in real-world or virtual customer service, and that message is highly likely to get amplified through social channels faster and louder. Only 36% of the NM Incite study's respondents reported that their problems were solved quickly and effectively. 36%? That's hardly an impressive number. It gets worse: 10% never got so much as a response - at all. Going back to the relationship analogy, companies that are this deep in the ditch where customer service is concerned are making their girl- or boyfriends really easy for a competitor to steal.

    Given the technology tools and data available right now for having an intimate knowledge of the customer, what products they've purchased, likely problems with those products, effective resolutions to those problems, and follow-up communication to gauge satisfaction, there are fewer excuses than ever for making the lifeblood of your business feel like you couldn't care less. @mikestiles


  • Trying to use stencils in 2D while retaining layer depth

    - by Steve
    This is a screen of what's going on, just so you can get a frame of reference: http://i935.photobucket.com/albums/ad199/fobwashed/tilefloors.png

    The problem I'm running into is that my game is slowing down due to the amount of texture swapping I'm doing during my draw call. Since walls, characters, floors, and all objects are on their respective sprite sheets containing those types, per tile draw the loaded texture is swapping no less than 3 to 5+ times as it cycles and draws the sprites in order to layer properly. Now, I've tried throwing all common objects together into their respective lists and then drawing them using layerDepth, which makes things a lot better, but the new problem I'm running into has to do with the way my doors/windows are drawn on walls. Namely, I was using stencils to clear out a block on the walls in the shape of the door/window, so that when the wall is drawn, it has a door/window-sized hole in it.

    This was my draw setup for walls when I was going tile by tile rather than with grouped-up common objects:

    1. First, check whether a door/window is on this wall. If not, skip all the steps and just draw normally. Otherwise:
    2. End the current spriteBatch.
    3. Clear the buffers with a transparent color to preserve what was already drawn.
    4. Start a new spriteBatch with stencil settings.
    5. Draw the door area.
    6. End the spriteBatch.
    7. Start a new spriteBatch that takes into account the previously set stencil.
    8. Draw the wall, which will now be drawn with a hole in it.
    9. End that spriteBatch.
    10. Start a new spriteBatch with the normal settings to continue drawing tiles.

    (A minimal sketch of the stencil states involved appears below.) In the tile-by-tile draw, clearing the depth/stencil buffers didn't matter, since I wasn't using any layerDepth to organize what draws on top of what. Now that I'm drawing from lists of common objects rather than tile by tile, my draw call has sped up considerably, but I can't seem to figure out a way to keep the stencil system that masks out the area where a door or window will be drawn into a wall. The root of the problem is that when I end a spriteBatch to change the DepthStencilState, it flattens the current RenderTarget, and there is no longer any depth sorting for anything drawn further down the line. This means walls always get drawn on top of everything regardless of depth or positioning in the game world, and even on top of each other, as the stencil has to happen once for every wall that has a door or window.

    Does anyone know of a way to get around this? To boil it down: I need a way to draw with things sorted by layer depth while also being able to stencil/mask out portions of specific sprites.
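
    For reference, here is a minimal sketch of the stencil states behind steps 4 and 7, assuming XNA 4.0; spriteBatch is an existing SpriteBatch, and this shows the state setup only - it does not by itself solve the layerDepth flattening problem:

        // Namespace assumed: Microsoft.Xna.Framework.Graphics

        // Step 4: write 1 into the stencil wherever the door/window shape is
        // drawn (color writes can be suppressed with a custom BlendState).
        DepthStencilState writeMask = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Always,
            StencilPass = StencilOperation.Replace,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };

        // Step 7: draw the wall only where the stencil is NOT 1, which leaves
        // the door/window hole.
        DepthStencilState drawThroughMask = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.NotEqual,
            ReferenceStencil = 1,
        };

        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                          null, writeMask, null);
        // ... draw the door/window shape to set the stencil ...
        spriteBatch.End();

        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                          null, drawThroughMask, null);
        // ... draw the wall; pixels under the mask are rejected ...
        spriteBatch.End();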


  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a High-Availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2" because in a hybrid environment (which most of us have) the situation changes the requirements. There are those that look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support.

    As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT - whether that be IaaS, PaaS or SaaS, or more often, a mix.

    In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a Cloud vendor. For Windows Azure, we have plans for support that you can pay for if you like: http://www.windowsazure.com/en-us/support/plans/

    You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem - you'll have to call them." I find that this is often the last thing the architects think about in a solution. It's fine to put off the support question prior to deployment, but I would hold off on calling it "production" until you have that plan in place. There are lots of examples, like this one, some of which are technology-specific: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html

    Once again, this is an "it depends" kind of approach. While it would be nice if there was just something in a box we could buy, it just doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.


  • How do I create weapon attachments?

    - by Tron86
    My question is: I am developing a game for XNA and I am trying to create a weapon attachment for my player model. My player model loads the .md3 format and reads tags for attachment points. I am able to get the tag of my model's hand, and I am also able to get the tag of my weapon's handle. For each tag I am able to get the rotation and position, and this is how I am calculating the player model's world matrix:

        Model.worldMatrix = Matrix.CreateScale(Model.scale) *
                            Matrix.CreateRotationX(-MathHelper.PiOver2) *
                            Matrix.CreateRotationY(MathHelper.PiOver2);

    Pretty simple: the player model has a scale and an orientation (it loads on its side, so I just use a 90-degree X-axis rotation, and a Y-axis rotation to face away from the camera). I then calculate the torso tag on the lower body, which gives me a local coordinate at the waist. Then I take that matrix and calculate the tag_weapon in the upper body. This gives me the hand position in local space. I also get the rotation matrix from that tag, which I store for later use. All this seems to work fine.

    Now I move onto my weapon:

        Matrix weaponWorld = Matrix.CreateScale(CurrentWeapon.scale) *
                             Matrix.CreateRotationX(-MathHelper.PiOver2) *
                             TagRotationMatrix *
                             Matrix.CreateTranslation(HandTag.Position) *
                             Matrix.CreateRotationY(PlayerRotation) *
                             Matrix.CreateTranslation(CollisionBody.Position);

    You may notice the weapon matrix gets rotated by 90 degrees on the X axis as well. This is because weapons load in on their sides too. Once again this seems pretty simple and follows the SRT order I keep reading about. The TagRotationMatrix is the hand's rotation. HandTag.Position is its position in local space. CreateRotationY(PlayerRotation) is the player's rotation in world space, and CollisionBody.Position is the player's world location.

    Everything seems to be in order, and it almost works in game. However, when the gun spawns and follows the player's hand, it seems to be flipped on an axis every couple of frames, almost like the X or Y axis is being inverted and then put right back. It's hard to explain, and I am totally stumped. Even removing all my X-axis fixes does nothing to solve the problem. Hopefully I explained everything well enough, as I am a bit new to this. Thanks!


  • jQuery slide open and close menus. How to stop them going crazy? [closed]

    - by firefusion
    I want the sub-menus of a vertical menu to expand and collapse when moused over. This is what I've got so far, but it goes crazy if you move too quickly, as all the animations run at once and on a delay. How can I make sure just one menu expands at a time? I've also set the current_page_item menu to be open by default, and I don't want this one to expand or collapse.

        <ul>
          <li class="current_page_item"><a href="#">Parent Item</a>
            <ul class="children">
              <li class="page_item"><a href="#">Child page</a></li>
              <li class="page_item"><a href="#">Child page</a></li>
              <li class="page_item"><a href="#">Child page</a></li>
              <li class="page_item"><a href="#">Child page</a></li>
            </ul>
          </li>
          <li class="page_item"><a href="#">Parent Item</a>
            <ul class="children">
              <li class="page_item"><a href="#">Child page</a></li>
              <li class="page_item"><a href="#">Child page</a></li>
            </ul>
          </li>
          <li class="page_item"><a href="#">Parent Item</a>
            <ul class="children">
              <li class="page_item"><a href="#">Child page</a></li>
              <li class="page_item"><a href="#">Child page</a></li>
            </ul>
          </li>
          <li class="page_item"><a href="#">Parent Item</a></li>
          <li class="page_item"><a href="#">Parent Item</a></li>
        </ul>

        jQuery('ul.children').hide();
        jQuery('li.current_page_item ul.children').show();
        jQuery('li.current_page_item').parent().show();

        jQuery("li.page_item").hover(function() {
            jQuery(this).find('ul.children').delay(300).slideDown('slow');
        }, function() {
            jQuery(this).find('ul.children').delay(300).slideUp('slow');
        });

  • How to have an Arduino wait until it receives data over serial?

    - by SonicDH
    So I've wired up a little robot with a sound shield and some sensors. I'm trying to write a sketch that will let me check the sensors. What I'd like for it to do is print out a little menu over serial, wait until the user sends a selection, jump to the function that matches their selection, then (once the function is done) jump back and print the menu again. Here's what I've written, but I'm not that good of a coder, so it doesn't work. Where am I going wrong?

        #include <Servo.h>

        Servo steering;
        Servo throttle;
        int pos = 0;
        int val = 0;

        void setup() {
          Serial.begin(9600);
          throttle.write(90);
          steering.write(90);
          pinMode(A0, INPUT);
          pinMode(7, INPUT);
          char ch = 0;
        }

        void loop() {
          Serial.println("Menu");
          Serial.println("--------------------");
          Serial.println("1. Motion Readout");
          Serial.println("2. Distance Readout");
          Serial.println("3. SD Directory Listing");
          Serial.println("4. Sound Test");
          Serial.println("5. Car Test");
          Serial.println("--------------------");
          Serial.println("Type the number and press enter");
          while (char ch = 0) {
            ch = Serial.read();
          }
          char ch;
          switch (ch)
          {
            case '1':
              motion();
          }
          ch = 0;
        }

        // menu over, lets get to work.
        void motion() {
          Serial.println("Haha, it works!");
        }

    I'm pretty sure a while loop is the right thing to do, but I'm probably implementing it wrong. Can anyone shed some light on this?


  • Crash when using datablocks

    - by scorcher24
    I have really thoroughly searched the net and could not find any solution for this, so I ask for help here. Anyway, I have this datablock in datablocks.cs:

        datablock t2dSceneObjectDatablock(EnemyShipConfig)
        {
            canSaveDynamicFields = "1";
            Layer = "3";
            size = "64 64";
            CollisionActiveSend = "1";
            CollisionActiveReceive = "1";
            CollisionCallback = true;
            CollisionLayers = "3";
            CollisionDetectionMode = "POLYGON";
            CollisionPolyList = "0.00 -0.791 0.756 0.751 -0.746 0.732";
            UsesPhysics = "0";
            Rotation = "-90";
            WorldLimitMode = "KILL";
            WorldLimitMax = "880 360";
            WorldLimitMin = "-765 -436";
            minFireRate = "2000";
            maxFireRate = "1200";
            laserSpeed = "800";
            minSpeed = "100";
            maxSpeed = "150";
        };

    This is an exact reproduction of an object that I have manually edited in the editor. So far, I have just used clone() to get as many enemies as I need while they are out of sight. It is an R-Type-style shooter, so I need a variable number of enemies. Since clone() spams my log, I decided to use datablocks, since they are also more flexible. That's what I get when I use clone():

        Con::execute - 0 has no namespace: onRemoveFromScene

    However, once spawning begins, my game freezes and crashes:

        function SpawnEnemy()
        {
            //%spawnedEnemy = EnemyShipMaster.clone(true);
            %spawnedEnemy = new t2dStaticSprite()
            {
                class = "EnemyShip";
                sceneGraph = $global_sceneGraph;
                datablock = "EnemyShipConfig";
                imageMap = "starshipImageMap";
                layer = 3;
            };
            %speed = getRandom(%spawnedEnemy.minSpeed, %spawnedEnemy.maxSpeed);
            %y = getRandom(-320, 320);

            // Set properties
            %spawnedEnemy.setPositionX(700);
            %spawnedEnemy.setPositionY(%y);
            %spawnedEnemy.setVisible(true);
            %spawnedEnemy.setLinearVelocityX( -%speed );
            %spawnedEnemy.setTimerOn( getRandom( %spawnedEnemy.maxFireRate, %spawnedEnemy.minFireRate ) );
        }

        // Definition of $global_sceneGraph from game.cs:
        $global_sceneGraph = sceneWindow2D.loadLevel(%level);

    As I said, it works fine when I use clone() (which is commented out here), but my log gets spammed. I really hope someone can shed some light on this; it's driving me crazy.


  • Play Framework Plugin for NetBeans IDE

    - by Geertjan
    The start of minimal support for the Play Framework in NetBeans IDE 7.3 Beta would constitute (1) recognizing Play projects, (2) an action to run a Play project, and (3) classpath support. Well, most of that I've created already. For example, below you can see logical views in the Projects window for Play projects (i.e., I can open all the samples that come with the Play distribution). Right-clicking a Play project lets you run it and, if the embedded browser is selected in the Options window, you can see the result in the IDE. Make a change to your code and refresh the browser, which immediately shows you your changes.

    What needs to be done, among other things:

    - A wizard for creating new Play projects, i.e., it would use the Play command line to create the application and then open it in the IDE.
    - Integration of everything available on the Play command line.
    - Maybe the logical view, i.e., what is shown in the Projects window, should be changed. Right now, only the folders "app" and "test" are shown there, with everything else accessible in the Files window, as can be seen in the screenshot above.
    - More work on the classpath, i.e., I've hardcoded a few things just to get things to work correctly.
    - An Options window extension to register the Play executable, instead of the current hardcoded solution.
    - Scala integrations, i.e., investigate if/how the NetBeans Scala plugin is helpful and, if not, create different/additional solutions. E.g., the HTML templates are partly in Scala, so Scala support needs to be embedded into HTML.
    - Hyperlinking in the "routes" file, as well as special support for the "application.conf" file.

    Anyone interested, especially if you're a Play fan (a "playboy"?), in joining me in working on this NetBeans plugin? I'll be uploading the sources to a java.net repository soon. It will be here, once it has been made publicly accessible: http://java.net/projects/nbplay/sources/nbplay

    A kind of cool detail is that the NetBeans plugin is based on Maven, which means that you could use any Maven-supporting IDE to work on this plugin.


  • Rebuilding a Mac Mini (early 2009)

    - by Kelly Jones
    This weekend I decided to rebuild the family's Mac Mini. It's the early 2009 model and I hadn't done it since we got it in March of 2009. Even worse, I had done the import data step (or whatever Apple calls it) which brought over all of the data files and apps from our previous Mac. AND that install goes back to before 2005, as far as I can remember. SO, to say that "cruft" had built up in the operating system is probably a bit of an understatement.

    The rebuild went pretty smoothly, especially since I had a couple of spare hard drives. I hooked up a spare USB drive and formatted it for use with the Mac. I then used Carbon Copy to clone the internal hard drive onto the USB drive. (Carbon Copy is a great little app that I used several years ago, and I was happy to see it was not only still around, but updated as well.)

    Once I had my backup, I shut down the Mac and replaced the internal hard drive. I had purchased the hard drive last fall to use with my work laptop, but I got a new work laptop (with awesome dual SSDs) so I wasn't using it anymore. The replacement drive (Seagate Momentus 7200.4 ST9500420AS 500GB 7200 RPM 2.5" SATA 3.0Gb/s Internal Notebook Hard Drive) has more than double the original's capacity and is also faster. I'll have to keep an eye on the temperature, since that 7200 RPM drive will run hotter.

    Opening the Mac Mini is not for the easily intimidated! That cool little case is quite the pain to open. Luckily, OWC put a video together here. After replacing the drive, I then installed a clean copy of OS 10.5 using the DVDs that came with the Mac. After the OS, it was time to reinstall the apps. I downloaded some of the freeware, just to make sure I had the latest versions. For the rest, I just copied from the backup cloned drive to the new drive. (I love the way most Mac apps are written - with almost everything contained within a "package" that I can just copy from one drive to another. MUCH better than the Windows way of using shared DLLs and the registry to store critical pieces that the app needs in order to run!)

    The whole process took longer than I would have preferred, but it was long overdue. It definitely "feels" faster, especially boot time and application launches.


  • Advantages to Multiple Methods over Switch

    - by tandu
    I received a code review from a senior developer today asking, "By the way, what is your objection to dispatching functions by way of a switch statement?" I have read in many places about how pumping an argument through switch to call methods is bad OOP, not as extensible, etc. However, I can't really come up with a definitive answer for him. I would like to settle this for myself once and for all.

    Here are our competing code suggestions (PHP used as an example, but this can apply more universally):

        // Named Switcher here because 'switch' itself is a reserved word in PHP
        // and cannot be used as a class name.
        class Switcher {
            public function go($arg) {
                switch ($arg) {
                    case "one":
                        echo "one\n";
                        break;
                    case "two":
                        echo "two\n";
                        break;
                    case "three":
                        echo "three\n";
                        break;
                    default:
                        throw new Exception("Unknown call: $arg");
                        break;
                }
            }
        }

        class Oop {
            public function go_one() {
                echo "one\n";
            }
            public function go_two() {
                echo "two\n";
            }
            public function go_three() {
                echo "three\n";
            }
            public function __call($_, $__) {
                throw new Exception("Unknown call $_ with arguments: " . print_r($__, true));
            }
        }

    Part of his argument was, "It (the switch method) has a much cleaner way of handling default cases than what you have in the generic __call() magic method." I disagree about the cleanliness and in fact prefer __call, but I would like to hear what others have to say.

    Arguments I can come up with in support of the Oop scheme:

    - A bit cleaner in terms of the code you have to write (less of it, easier to read, fewer keywords to consider).
    - Not all actions are delegated to a single method. Not much difference in execution here, but at least the text is more compartmentalized.
    - In the same vein, another method can be added anywhere in the class instead of in a specific spot.
    - Methods are namespaced, which is nice. This does not apply here, but consider a case where Switcher::go() operated on a member rather than a parameter. You would have to change the member first, then call the method. With Oop you can call the methods independently at any time.

    Arguments I can come up with in support of the Switch scheme:

    - For the sake of argument, a cleaner method of dealing with a default (unknown) request.
    - Seems less magical, which might make unfamiliar developers feel more comfortable.

    Anyone have anything to add for either side? I'd like to have a good answer for him.


  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue

    Following the EF series (I, II and III), in this chapter we will see how to create a DB record from EF.

    Inserting Data

    As we indicated in the 2nd post: "One Entity matches a DB record, and one property matches a Table Column". To start, we need to create an object from one of the entities:

        EMPLEADOS empleado = new EMPLEADOS();

    Alternatively, as I mentioned previously, you can use the static factory function that VS defines for each entity. Once we have created the object, we can access its properties and fill them as in any common class:

        empleado.NOMBRE = "Javier Torrecilla";

    After we finish filling our entity's properties, the object must be added to the appropriate ObjectSet in the ObjectContext:

        enti.EMPLEADOS.AddObject(empleado);

    or

        enti.AddToEMPLEADOS(empleado);

    Both methods do the same thing: create an insert statement. Have we finished? No. Every entity has a property called "EntityState". This property is an enum of type EntityState, which has the following values:

    - Detached: the entity is created, but not added to the context.
    - Unchanged: there are no pending changes in the entity.
    - Added: the entity is added to the ObjectSet, but has not yet been sent to the DB.
    - Deleted: the object is deleted from the ObjectSet, but not yet from the DB.
    - Modified: there are pending changes to confirm.

    Let's look at the values this property takes during the creation steps:

    1. While the object is created and we are filling its properties: EntityState.Detached.
    2. After adding it to the ObjectSet: EntityState.Added. This does not indicate that the record is in the DB.
    3. Saving the data: to save the data in the DB, we call the "SaveChanges" method of the ObjectContext. After invoking it, the property will be EntityState.Unchanged.

    What does the SaveChanges method do? This function synchronizes and sends all pending changes to the DB. It adds, modifies or deletes all entities whose EntityState property is set to Added, Deleted or Modified. After finishing, all added or modified entities change their state to "Unchanged", and deleted entities take the "Detached" state.
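
    Putting the steps together, here is a small consolidated sketch of the flow above; the ModelContext name is hypothetical (use your generated ObjectContext class), while EMPLEADOS, AddObject and SaveChanges come straight from the post:

        // Consolidated sketch of the insert flow; "ModelContext" stands in for
        // the generated ObjectContext class name.
        using (var enti = new ModelContext())
        {
            var empleado = new EMPLEADOS();
            empleado.NOMBRE = "Javier Torrecilla";
            // State here: EntityState.Detached

            enti.EMPLEADOS.AddObject(empleado);
            // State here: EntityState.Added (nothing written to the DB yet)

            enti.SaveChanges();
            // State here: EntityState.Unchanged (the row has been inserted)
        }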

