Search Results

Search found 8161 results on 327 pages for 'explicit loading'.

Page 257/327

  • Incentivizing Work with Development Teams

    - by MarkPearl
    Recently I saw someone on Twitter asking about incentives and whether anyone had past experience with incentivizing work. I promised to respond with some of the experiences I have had in the past, so here goes...

    **Disclaimer** - these are my experiences with incentives, generally in software development. In some other industries this may not be applicable. This is also my thinking at this point in time; with more experience my opinion may change.

    Incentivize at the level that you want people to group at. If you want to promote a team mentality, incentivize teams. If you want to promote an individual mentality, incentivize individuals. There is nothing worse than mixing this up. Some organizations put a lot of effort into establishing teams and team mentalities but reward individuals, which counteracts the resources they have put towards establishing a team mentality. In the software projects that I work with, we want to promote cross-functional teams that collaborate. Personally, if I were on a team and knew that there was an opportunity to work on a critical component of the system, and that by doing so I would get a bigger bonus, then I would be hesitant to include other people in solving that problem. Thus, I would hinder the team's efforts at being cross-functional and reduce collaboration levels.

    Does that mean everyone in the team should get an even share of an incentive? In most situations I would say yes, even though this may feel counter-intuitive. I have heard the argument that if person X contributed more than person Y, then they should be rewarded more. This may sound controversial, but I would rather treat people according to how I would like them to perform, not where they currently are. To add to this approach, if someone is freeloading, you can bet your bottom dollar that the team will make this a lot more transparent if they feel that individual is going to be rewarded at the same level as everyone else.

    Bad incentives promote destructive work. If you are going to incentivize people, pick your incentives very carefully. I once had an experience with a salesperson who was told they would get a bonus provided they met an ordering target with a particular supplier. What did this person do? They sold everything at cost for the next month or so. They reached the goal, but the company didn't gain anything from it. It was a bad incentive. Expect the same with development teams: if you incentivize zero bug levels, you will get zero code committed to the solution; if you incentivize lines of code, you will get many, many lines of bad code.

    Is there such a thing as a good incentive? Monetary-wise, I am not sure there is. I would much rather encourage organizations to pay their people what they are worth upfront. I would also advise against paying money to teams as an incentive, or even as a bonus or reward for reaching a milestone. Rather give the team a breakaway that promotes team building as a reward for reaching a milestone than pay them more money. I would also advise against making the incentive the reason to reach the milestone. If this becomes the norm, it encourages people to do their job only if there is an incentive at the end of the line. This is not a behaviour one wants to encourage. If the team or individual is in the right mind-set, they should not work any harder than they do right now with normal pay.

    Read the article

  • Where should instantiated classes be stored?

    - by Eric C.
    I'm having a bit of a design dilemma here. I'm writing a library that consists of a bunch of template classes that are designed to be used as a base for creating content. For example:

        public class Template
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public string Attribute1 { get; set; }
            public string Attribute2 { get; set; }

            public Template()
            {
                //constructor
            }

            public void DoSomething()
            {
                //does something
            }
            ...
        }

    The problem is, not only is the library providing the templates, it will also supply quite a few predefined templates which are instances of these template classes. The question is, where do I put these instances of the templates? The three solutions I've come up with so far are:

    1) Provide serialized instances of the templates as files. On the one hand, this solution would keep the instances separated from the library itself, which is nice, but it would also potentially add complexity for the user. Even if we provided methods for loading/deserializing the files, users would still have to deal with a bunch of files, plus some kind of config file so the app knows where to look for them. Also, creating the template files would probably require a separate app, so if the user wanted to stick with the file-based method of storing templates, we'd have to provide some kind of app for creating the template files. Finally, this requires external dependencies for testing the templates in the user's code.

    2) Add read-only instances to the template class. Example:

        public class Template
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public string Attribute1 { get; set; }
            public string Attribute2 { get; set; }

            public Template PredefinedTemplate
            {
                get
                {
                    Template templateInstance = new Template();
                    templateInstance.Name = "Some Name";
                    templateInstance.Description = "A description";
                    ...
                    return templateInstance;
                }
            }

            public Template()
            {
                //constructor
            }

            public void DoSomething()
            {
                //does something
            }
            ...
        }

    This method would be convenient for users, as they would be able to access the predefined templates directly in code and would be able to unit test code that used them. The drawback here is that the predefined templates pollute the Template type with a bunch of extra stuff. I suppose I could put the predefined templates in a different namespace to get around this drawback. The only other problem with this approach is that I'd basically have to duplicate all the namespaces in the library in the predefined namespace (e.g. Templates.SubTemplates and Predefined.Templates.SubTemplates), which would be a pain and would also make refactoring more difficult.

    3) Make the templates abstract classes and make the predefined templates inherit from those classes. For example:

        public abstract class Template
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public string Attribute1 { get; set; }
            public string Attribute2 { get; set; }

            public Template()
            {
                //constructor
            }

            public void DoSomething()
            {
                //does something
            }
            ...
        }

    and

        public class PredefinedTemplate : Template
        {
            public PredefinedTemplate()
            {
                this.Name = "Some Name";
                this.Description = "A description";
                this.Attribute1 = "Some Value";
                ...
            }
        }

    This solution is pretty similar to #2, but it ends up creating a lot of classes that don't really do anything (none of our predefined templates currently override behavior) and don't have any methods, so I'm not sure how good a practice it is. Has anyone else had any experience with something like this? Is there a best practice of some kind, or a different/better approach that I haven't thought of? I'm kind of banging my head against a wall trying to figure out the best way to go. Thanks!
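    One more option worth sketching: keep Template a plain class (as in option 2) and expose the predefined instances from a separate static class, so they stay out of the Template type itself but remain discoverable and unit-testable in code. A minimal sketch - the PredefinedTemplates and BasicReport names are hypothetical, not from the library above:

        public static class PredefinedTemplates
        {
            private static Template _basicReport;

            // Shared, lazily created instance; every caller gets the same object.
            // (Not thread-safe as written - add locking or eager init if needed.)
            public static Template BasicReport
            {
                get
                {
                    if (_basicReport == null)
                    {
                        _basicReport = new Template
                        {
                            Name = "Basic Report",
                            Description = "A simple report layout",
                            Attribute1 = "Some Value",
                            Attribute2 = "Another Value"
                        };
                    }
                    return _basicReport;
                }
            }
        }

    This keeps the Template type clean without needing a parallel inheritance tree or a mirrored namespace hierarchy, at the cost of one extra static class per area of the library.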

    Read the article

  • Ubuntu 12.04 HP G72 problem installing proprietary wireless driver

    - by user69402
    I have a fresh Ubuntu 12.04 installed on an HP G72 machine. In order for my wireless to work I need the proprietary driver installed - the Broadcom STA wireless driver. Trying to install it from System Settings gives me the error: "Sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log". So far I suspect the error to be caused by a bad "bcmwl-kernel-source" installation. What I tried:

    1. remove "bcmwl-kernel-source"
    2. install "bcmwl-kernel-source" through the terminal - the installation ends with "error code (1)"

    I would greatly appreciate any help. Here is everything that the terminal returns:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,151 kB of archives.
        After this operation, 3,514 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 170331 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.38+bdcom-0ubuntu6.1_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.38+bdcom-0ubuntu6.1) ...
        Loading new bcmwl-5.100.82.38+bdcom DKMS files...
        /usr/sbin/dkms: line 467: unset: `POST_REMOVE$PRE_BUMLD': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `BUILD_E\CLUWIVE_ARCH': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `modules_conf_arra}': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 419: ${!POST_REMOVE$PRE_BUMLD[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!BUILD_E\CLUWIVE_ARCH[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        malloc: ../bash/subst.c:3671: assertion botched
        free: start and end chunk sizes differ
        Aborting.../tmp/tmp.pEXTnftUfI: line 4: modules_conf_arra}[[@]}]=[[@]}]}: command not found
        dkms.conf: Error! No 'DEST_MODULE_LOCATION' directive specified for record #0.
        dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with '/kernel', '/updates', or '/extra' in record #0.
        dkms.conf: Error! No 'PACKAGE_VERSION' directive specified.
        Error! Bad conf file.
        File: /usr/src/bcmwl-5.100.82.38+bdcom/dkms.conf does not represent a valid dkms.conf file.
        dpkg: error processing bcmwl-kernel-source (--configure):
         subprocess installed post-installation script returned error exit status 8
        Errors were encountered while processing:
         bcmwl-kernel-source
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Linux RAID 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
    We have an old RAID 1 Linux server (Ubuntu Lucid 10.04) with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda had ominous pre-failure SMART signs (~4000 reallocated sectors). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array

    Everything went smoothly until the very end. When it looked like it was finishing synchronizing the last partition, the other, old drive failed. At this point I am very unsure of the state of the system. Everything seems to work and the files all seem to be accessible, just as if it synchronized everything, but I'm new to RAID and I'm worried about what is going on. The /proc/mdstat output is:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sdb4[2](S) sda4[0]
              478713792 blocks [2/1] [U_]
        md2 : active raid1 sdb3[1] sda3[2](F)
              244140992 blocks [2/1] [_U]
        md1 : active raid1 sdb2[1] sda2[2](F)
              244140992 blocks [2/1] [_U]
        md0 : active raid1 sdb1[1] sda1[2](F)
              9764800 blocks [2/1] [_U]
        unused devices: <none>

    My questions:

    1. The order of [_U] vs [U_] - why isn't it consistent across the arrays? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial information but found no explicit indication.) If I read correctly, for md0 [_U] should mean /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3?

    2. I understand /dev/sdb4 is now a spare because it probably failed to synchronize 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive apparently cannot even be accessed by SMART anymore, so I wouldn't expect it to be up. What is wrong with my interpretation of the output?
    I also attach the outputs of mdadm --detail for the four partitions:

        /dev/md0:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:07 2011
            Raid Level : raid1
            Array Size : 9764800 (9.31 GiB 10.00 GB)
            Used Dev Size : 9764800 (9.31 GiB 10.00 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:27:33 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0
            UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2
            Events : 0.7704

            Number   Major   Minor   RaidDevice   State
            0        0       0       0            removed
            1        8       17      1            active sync   /dev/sdb1
            2        8       1       -            faulty spare  /dev/sda1

        /dev/md1:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:15 2011
            Raid Level : raid1
            Array Size : 244140992 (232.83 GiB 250.00 GB)
            Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:39:06 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0
            UUID : 8bcd5765:90dc93d5:cc70849c:224ced45
            Events : 0.1508280

            Number   Major   Minor   RaidDevice   State
            0        0       0       0            removed
            1        8       18      1            active sync   /dev/sdb2
            2        8       2       -            faulty spare  /dev/sda2

        /dev/md2:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:19 2011
            Raid Level : raid1
            Array Size : 244140992 (232.83 GiB 250.00 GB)
            Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:46:44 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0
            UUID : 2885668b:881cafed:b8275ae8:16bc7171
            Events : 0.2289636

            Number   Major   Minor   RaidDevice   State
            0        0       0       0            removed
            1        8       19      1            active sync   /dev/sdb3
            2        8       3       -            faulty spare  /dev/sda3

        /dev/md3:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:22 2011
            Raid Level : raid1
            Array Size : 478713792 (456.54 GiB 490.20 GB)
            Used Dev Size : 478713792 (456.54 GiB 490.20 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 3
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:19:20 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 1

            Number   Major   Minor   RaidDevice   State
            0        8       4       0            active sync   /dev/sda4
            1        0       0       1            removed
            2        8       20      -            spare         /dev/sdb4

    The "active sync" on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure of what I should sync with what, and what is going on. I am also quite baffled by the fact that /dev/sda decided to fail exactly when the RAID finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo

    Read the article

  • Memory fills up with vertex buffers

    - by Christian Frantz
    I'm having a pretty strange problem that I didn't think I'd run into. I was finally able to store a 50x50 grid in one vertex buffer, in hopes of better performance. Before, each cube had an individual vertex buffer, and with four 50x50 grids this slowed my game down tremendously - but it still ran. With four 50x50 grids in my new code, that's only four vertex buffers, yet I get a memory error. And when I load the game with one grid, it takes forever to load, whereas my previous version started up right away. So I don't know if I'm storing chunks wrong or what, but it has me stumped.

        for (int x = 0; x < 50; x++)
        {
            for (int z = 0; z < 50; z++)
            {
                for (int y = 0; y <= map[x, z]; y++)
                {
                    SetUpVertices();
                    SetUpIndices();
                    cubes.Add(new Cube(device, new Vector3(x, map[x, z] - y, z), grass));
                }
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionTexture), vertices.Count(), BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        indexBuffer = new IndexBuffer(device, typeof(short), indices.Count(), BufferUsage.WriteOnly);
        indexBuffer.SetData(indices.ToArray());

    That's how they're stored. The array I'm reading from is a byte array that defines the coordinates of my map. My old version used the same loading from an array, so that hasn't changed. The only difference is the one vertex buffer instead of 2500 for a 50x50 grid. cubes is just a normal list that holds all my cubes for the vertex buffer. Another thing that just came to mind is my draw calls. If I'm setting an effect for each cube in my cube list, that's probably going to take a lot of memory. How can I avoid doing this? I need the foreach loop to set my cubes to the right position:

        foreach (Cube block in cube.cubes)
        {
            effect.VertexColorEnabled = false;
            effect.TextureEnabled = true;
            Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
            Matrix scale = Matrix.CreateScale(1f);
            Matrix translate = Matrix.CreateTranslation(block.cubePosition);
            effect.World = center * scale * translate;
            effect.View = cam.view;
            effect.Projection = cam.proj;
            effect.FogEnabled = false;
            effect.FogColor = Color.CornflowerBlue.ToVector3();
            effect.FogStart = 1.0f;
            effect.FogEnd = 50.0f;
            cube.Draw(effect);
            noc++;
        }
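    One way to trim the per-cube work in the loop above is to hoist the state that never changes between cubes out of the loop, leaving only the world matrix inside it. A minimal reworking, assuming effect is a single BasicEffect shared by all cubes (that type is an assumption; cam, block.cubePosition, cube.Draw and noc are taken from the snippet):

        // Invariant state: set once per frame instead of once per cube.
        effect.VertexColorEnabled = false;
        effect.TextureEnabled = true;
        effect.FogEnabled = false;
        effect.View = cam.view;
        effect.Projection = cam.proj;

        // CreateScale(1f) is the identity, so only the centering offset and the
        // per-cube translation actually matter.
        Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));

        foreach (Cube block in cube.cubes)
        {
            effect.World = center * Matrix.CreateTranslation(block.cubePosition);
            cube.Draw(effect);
            noc++;
        }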

    Read the article

  • Scripting Windows Shares - VBS

    - by Calvin Piche
    So I am totally new to VBS - I've never used it. I am trying to create multiple shares, and I found a Microsoft VBS script that can do this (http://gallery.technet.microsoft.com/scriptcenter/6309d93b-fcc3-4586-b102-a71415244712). My question is: this script only allows one domain group or user to be added for permissions, whereas I need to add a couple with different permissions (I've got that part figured out). Below is the script I have modified for my needs; I just need to add in the second group with the other permissions. If there is an easier way to do this, please let me know.

        'ShareSetup.vbs
        '==========================================================================
        Option Explicit

        Const FILE_SHARE = 0
        Const MAXIMUM_CONNECTIONS = 25

        Dim strComputer
        Dim objWMIService
        Dim objNewShare

        strComputer = "."
        Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
        Set objNewShare = objWMIService.Get("Win32_Share")

        Call sharesec ("C:\Published Apps\Logs01", "Logs01", "Log01", "Support")
        Call sharesec2 ("C:\Published Apps\Logs01", "Logs01", "Log01", "Domain Admins")

        Sub sharesec(Fname, shr, info, account)
            'Fname = Folder path, shr = Share name, info = Share Description,
            'account = account or group you are assigning share permissions to
            Dim FSO, Services, SecDescClass, SecDesc, Trustee, ACE, Share, InParam
            Dim Network, FolderName, AdminServer, ShareName

            FolderName = Fname
            AdminServer = "\\" & strComputer
            ShareName = shr

            Set Services = GetObject("WINMGMTS:{impersonationLevel=impersonate,(Security)}!" & AdminServer & "\ROOT\CIMV2")
            Set SecDescClass = Services.Get("Win32_SecurityDescriptor")
            Set SecDesc = SecDescClass.SpawnInstance_()

            'Set Trustee = Services.Get("Win32_Trustee").SpawnInstance_
            'Trustee.Domain = Null
            'Trustee.Name = "EVERYONE"
            'Trustee.Properties_.Item("SID") = Array(1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0)

            'Replace "domain" with your domain name. To assign permissions to individual
            'accounts use SetAccountTrustee rather than SetGroupTrustee.
            Set Trustee = SetGroupTrustee("domain", account)

            Set ACE = Services.Get("Win32_Ace").SpawnInstance_
            ACE.Properties_.Item("AccessMask") = 1179817
            ACE.Properties_.Item("AceFlags") = 3
            ACE.Properties_.Item("AceType") = 0
            ACE.Properties_.Item("Trustee") = Trustee
            SecDesc.Properties_.Item("DACL") = Array(ACE)

            Set Share = Services.Get("Win32_Share")
            Set InParam = Share.Methods_("Create").InParameters.SpawnInstance_()
            InParam.Properties_.Item("Access") = SecDesc
            InParam.Properties_.Item("Description") = "Public Share"
            InParam.Properties_.Item("Name") = ShareName
            InParam.Properties_.Item("Path") = FolderName
            InParam.Properties_.Item("Type") = 0
            Share.ExecMethod_ "Create", InParam
        End Sub

        Sub sharesec2(Fname, shr, info, account)
            '(identical to sharesec except it builds ACE2 instead of ACE, and stops
            'after SecDesc.Properties_.Item("DACL") = Array(ACE2) - it never builds
            'the InParam or calls Win32_Share's Create method)
        End Sub

        Function SetAccountTrustee(strDomain, strName)
            set objTrustee = getObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Trustee").SpawnInstance_
            set account = getObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Account.Name='" & strName & "',Domain='" & strDomain & "'")
            set accountSID = getObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_SID.SID='" & account.SID & "'")
            objTrustee.Domain = strDomain
            objTrustee.Name = strName
            objTrustee.Properties_.item("SID") = accountSID.BinaryRepresentation
            set accountSID = nothing
            set account = nothing
            set SetAccountTrustee = objTrustee
        End Function

        Function SetGroupTrustee(strDomain, strName)
            Dim objTrustee, account, accountSID
            set objTrustee = getObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Trustee").SpawnInstance_
            set account = getObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Group.Name='" & strName & "',Domain='" & strDomain & "'")
            set accountSID = getObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_SID.SID='" & account.SID & "'")
            objTrustee.Domain = strDomain
            objTrustee.Name = strName
            objTrustee.Properties_.item("SID") = accountSID.BinaryRepresentation
            set accountSID = nothing
            set account = nothing
            set SetGroupTrustee = objTrustee
        End Function

    Read the article

  • Asus X202e VivoBook, dual boot. How to get around UEFI and have Win8 & Ubuntu?

    - by Nukeface
    I've gotten my hands on an Asus VivoBook X202e - the i3-core version. I like it: handy to use, small, etc. For school I still need Windows (* sigh *) for .NET development. (I know, it's possible in Ubuntu, this and that, but for ease I'm keeping it with Win8 for the moment.) So: how do I install both on this little thing? I've found a way into the BIOS (mash F2 before the splash screen - it works only after a reboot, not a cold boot). But the whole boot loading setup is different from what I know, and I must have messed something up, because it's been "Attempting Repairs", "Analyzing hard disk", and a bunch of other things for the past 15 minutes. (All I've done is select "disabled" on Secure Boot - picky as ** Microsoft.) Keeping the original Windows installation is of no concern; I've already found the product key and have a clean install waiting. BTW, I'm not trying to leech knowledge, even though this is my first question and I have no answers - I'm more and more active on Stack Overflow. But, especially due to Secure Boot and Windows 8, I'm moving over to Ubuntu. Well, more and more anyway; I like my Windows-based games as well ;)

    UPDATE: Managed to do a clean install of Windows 8 Pro. After disabling Secure Boot, I also had to disable Fast Boot and enable Launch CSM, leaving the option which appeared (Launch PXE OpROM) disabled. Then I rebooted with the USB boot drive I created using the Windows 7 USB/DVD Download Tool provided by Microsoft. During the installation, I chose to install a clean version and therefore deleted the partitions containing the current Windows files. I left the recovery partition (you never know...). Of course, the new Windows installation did not like this: apparently Windows cannot be installed on a GPT hard disk. Remember, I hadn't changed the partition table - it was still factory default, minus a few partitions, granted. So I deleted ALL partitions, formatted the disk, and created a new partition. Et voila, the Windows installation started. FINALLY!

    WONDROUS: After the installation, Windows still had background images located in C:/Users/ ME /AppData/Local/Microsoft/Themes/RoamedThemeFiles/DesktopBackground/ that I had in the previous installation - before doing: format, delete partition, cascade partitions, create new partition of a different size, format partition, install Windows. It managed to keep the images through all that. Anyone got an idea on that one? It also remembered the settings for the Windows Aero theme...

    UPDATED QUESTION: After all this you'd think I'd have the rest figured out. Wrong. The Ubuntu 12.10 64-bit installer can't read the partitioning of the HDD during the installation. Any ideas on how to fix this so the install for a dual-boot system can proceed? (Preferably without starting anew with Windows as well ;) )

    Read the article

  • Making user input/math on data fast, unlike excel type programs

    - by proGrammar
    I'm creating a research platform, solely for myself, to do some research on data. Programs like Excel are terribly slow for me, so I'm trying to come up with another solution. Originally I used Excel: A1 was the cell that contained the data, and all other cells in use calculated something on A1, or on other cells that could in the end all be traced back to A1. A1 was like an element of an array; I then incremented it to go through all my data. This was way too slow.

    The only other option I found originally was to hand-code the calculations in C# inside a loop, recompiling each time I changed my math. This was terribly slow to do, and I had to order everything correctly so things would update in the right sequence (dependencies). I could have used events instead, but hand-coding events for each cell-like calculation would also be very slow.

    Next I created an application to read Excel and imitate it exactly, which is what I now use. Basically, I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas over a lot more data, a whole lot faster. The advantages are that my application sorts out the dependencies (or I could use events), so I don't have to - unlike Excel - and of course the speed.

    But now it's not a single application anymore. Instead it's two applications: one which only reads my formulas and writes another program, and the resulting one, which only lives for a short while before I do other runs through my data with different formulas/settings. So I can't see multiple results at one time without introducing even more programs, like a database, or at least having the two applications talk to each other.

    My idea was to have a DLL that would be written, compiled, loaded, and unloaded again and again - a self-updating program, sort of. But apparently that's not possible without another AppDomain, which means data has to be marshaled to move between the AppDomains. That would slow things down - not for summaries, but for other things I need to do with all my data. I'm also forgetting to mention the huge problem with restarting an application again and again: having to reload ALL my data into memory every time. But it's still a whole lot faster than Excel.

    I'm really puzzled as to what people do when they want to research data fast. I seem completely unable to have a program accept user input and still be fast. My understanding is that it would have to do what Excel does, which is evaluate strings again and again, so my only option is to repeatedly compile applications. Do I have a correct understanding of the computer science here? I've only just begun programming and didn't think I would have to learn much to do some simple math on data. As I understand it, user-defined calculations are either compiled into a program or evaluated from a string, again and again - and my only option is probably to switch operating systems or something, to be able to have a program compile and run code without stopping (writing/compiling a DLL, loading the DLL into the program, unloading, and repeating). Can someone give me some idea of how computers work here? Is anything better possible - like a running program that can accept user input, compile it, and then unload it later? I mean, heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the caveman days?

    Sorry - it's just so frustrating not knowing what one can and can't do. If only I could understand and learn this stuff fast enough.
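    For what it's worth, .NET can compile user-entered formulas at runtime inside the running process via System.CodeDom.Compiler, without writing and launching a second executable. A minimal sketch, assuming the formula is a C# expression over a double x (the UserFormula/Eval names are made up for the example); the caveat from the question still applies - each compiled assembly stays loaded in the AppDomain until the process exits unless it is compiled into a separate AppDomain:

        using System;
        using System.CodeDom.Compiler;
        using System.Reflection;
        using Microsoft.CSharp;

        class FormulaRunner
        {
            static void Main()
            {
                // User-entered math, wrapped in a class we generate on the fly.
                string formula = "x * x + 2 * x";
                string source = @"
                    public static class UserFormula
                    {
                        public static double Eval(double x) { return " + formula + @"; }
                    }";

                var provider = new CSharpCodeProvider();
                var options = new CompilerParameters { GenerateInMemory = true };
                CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                if (results.Errors.HasErrors)
                    throw new InvalidOperationException(results.Errors[0].ToString());

                MethodInfo eval = results.CompiledAssembly
                                         .GetType("UserFormula")
                                         .GetMethod("Eval");

                // Run the freshly compiled formula over the data - no restart,
                // no reloading the dataset into memory.
                double[] data = { 1.0, 2.0, 3.0 };
                foreach (double x in data)
                    Console.WriteLine(eval.Invoke(null, new object[] { x }));
            }
        }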

    Read the article

  • Data Formatters temporarily unavailable

    - by iphone newbie
    I'm currently working on an iPhone app. This app has a login screen and a signup screen. After the user has successfully signed up, I dismiss the signup view, and the app automatically logs in using the created account. After that, the login view is dismissed, showing the main view.

    I'm trying to modify this by immediately dismissing the login view, since I already have the account details of the user when the signup is successful. Basically, the ideal flow is: after the user successfully signs up, I save the username and password in a singleton class, then dismiss the signup view. When I get to the parent view (which is the login screen), I have a variable that checks whether there was a successful signup. If that variable is true, I want to immediately dismiss the login view. However, I come across this error message:

        Data Formatters temporarily unavailable, will re-try after a 'continue'.
        (Unknown error loading shared library "/Developer/usr/lib/libXcodeDebuggerSupport.dylib")

    I'm not really sure why this happens. I have no problems dismissing the login view when I go through the actual login procedure - which of course also dismisses the login view if the user inputs a correct username and password. I'm not exactly sure, but I'm starting to think that the iPhone cannot handle dismissing two view controllers almost at the same time. Is it possible that I'm dismissing the login view too quickly? Is that a factor? Is there any way for me to dismiss two view controllers almost simultaneously without coming across this error message?

    Read the article

  • Ext JS Tab Panel - Dynamic Tabs - Tab Exists Not Working

    - by Joey Ezekiel
    Hi. I would appreciate it if somebody could help me with this. I have a TreePanel whose nodes, when clicked, load a tab into a TabPanel. The tabs are loading all right, but my problem is duplication: I need to check whether a tab already exists before adding it to the tab panel. I can't seem to get this resolved and it is eating my brains. This is pretty simple, and I have checked Stack Overflow and the Ext JS forums for solutions, but they don't seem to work for me - or I'm being blind. This is my code for the tree:

        var opstree = new Ext.tree.TreePanel({
            renderTo: 'opstree',
            border: false,
            width: 250,
            height: 'auto',
            useArrows: false,
            animate: true,
            autoScroll: true,
            dataUrl: 'libs/tree-data.json',
            root: {
                nodeType: 'async',
                text: 'Tool Actions'
            },
            listeners: {
                render: function() {
                    this.getRootNode().expand();
                }
            }
        });

        opstree.on('click', function(n) {
            var sn = this.selModel.selNode || {}; // selNode is null on initial selection
            renderPage(n.id);
        });

        function renderPage(tabId) {
            var TabPanel = Ext.getCmp('content-tab-panel');
            var tab = TabPanel.getItem(tabId);
            //Ext.MessageBox.alert('TabGet', tab);
            if (tab) {
                TabPanel.setActiveTab(tabId);
            } else {
                TabPanel.add({
                    title: tabId,
                    html: 'Tab Body ' + (tabId) + '',
                    closable: true
                }).show();
                TabPanel.doLayout();
            }
        }

    And this is the code for the TabPanel:

        new Ext.TabPanel({
            id: 'content-tab-panel',
            region: 'center',
            deferredRender: false,
            enableTabScroll: true,
            activeTab: 0,
            items: [{
                contentEl: 'about',
                title: 'About the Billing Ops Application',
                closable: true,
                autoScroll: true,
                margins: '0 0 0 0'
            }, {
                contentEl: 'welcomescreen',
                title: 'PBRT Application Home',
                closable: false,
                autoScroll: true,
                margins: '0 0 0 0'
            }]
        });

    Can somebody please help?

    Read the article

  • WPF MVVM ComboBox SelectedItem or SelectedValue not working

    - by cjibo
    Update: After a bit of investigating, what seems to be the issue is that the SelectedValue/SelectedItem binding is applied before the ItemsSource is finished loading. If I sit in a breakpoint and wait a few seconds, it works as expected. I don't know how I'm going to get around this one. End update.

    I have a WPF application using MVVM with a ComboBox. Below is the ViewModel example. The issue I'm having is that when we leave our page and navigate back, the ComboBox does not show the currently selected value.

    View model:

        public class MyViewModel
        {
            private MyObject _selectedObject;
            private Collection<Object2> _objects;
            private IModel _model;

            public MyViewModel(IModel model)
            {
                _model = model;
                _objects = _model.GetObjects();
            }

            public Collection<MyObject> Objects
            {
                get { return _objects; }
                private set { _objects = value; }
            }

            public MyObject SelectedObject
            {
                get { return _selectedObject; }
                set { _selectedObject = value; }
            }
        }

    For the sake of this example, let's say MyObject has two properties (Text and Id). My XAML for the ComboBox looks like this:

        <ComboBox Name="MyComboBox"
                  Height="23"
                  Width="auto"
                  SelectedItem="{Binding Path=SelectedObject, Mode=TwoWay}"
                  ItemsSource="{Binding Objects}"
                  DisplayMemberPath="Text"
                  SelectedValuePath="Id" />

    No matter which way I configure this, when I come back to the page and the object is reassembled, the ComboBox will not select the value - even though the property getter returns the correct object. I'm not sure if this is just an issue with the way the ComboBox and the MVVM pattern work together. The text box bindings we are doing work correctly.
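    One detail worth noting about the ViewModel above: neither Objects nor SelectedObject raises a change notification, so if the selection is set while the ItemsSource is still loading, the ComboBox never hears about it afterwards. A hedged sketch of the usual INotifyPropertyChanged pattern (only the changed parts are shown; whether this resolves this particular timing issue is an assumption):

        using System.ComponentModel;

        public class MyViewModel : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            private MyObject _selectedObject;

            public MyObject SelectedObject
            {
                get { return _selectedObject; }
                set
                {
                    _selectedObject = value;
                    // Re-announce the selection so the binding re-evaluates it,
                    // even when it is set from code after the items finish loading.
                    PropertyChangedEventHandler handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("SelectedObject"));
                }
            }
        }

    For SelectedItem to match after the page is recreated, the bound instance must also compare equal to one of the items in Objects - either the same reference or a type with an overridden Equals - which may be the real catch when the object graph is rebuilt.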

    Read the article

  • WPF MVVM UserControl Binding "Container", dispose to avoid memory leak.

    - by user178657
    For simplicity: I have a window with a bindable UserControl:

        <UserControl Content="{Binding Path=BindingControl, UpdateSourceTrigger=PropertyChanged}" />

    I have two user controls, ControlA and ControlB. Both UserControls have their own DataContext view models, ControlAViewModel and ControlBViewModel, and both inherit from a ViewModelBase:

        public abstract class ViewModelBase : DependencyObject, INotifyPropertyChanged, IDisposable ...

    The main window was added to IoC. To set the property of the bindable UserControl, I do:

        ComponentRepository.Resolve<MainViewWindow>().BindingControl = new ControlA();

    ControlA, in its DataContext, creates a DispatcherTimer to do "some stuff". Later on, I need to navigate elsewhere, so the other UserControl is loaded into the container:

        ComponentRepository.Resolve<MainViewWindow>().BindingControl = new ControlB();

    If I put a breakpoint in the "some stuff" that was in ControlA's DataContext, the DispatcherTimer is still running. In other words, loading a new UserControl into the bindable UserControl on the main window does not dispose/close/GC the DispatcherTimer that was created in the DataContext view model. I've looked around, and as others have stated, Dispose doesn't get called because it's not supposed to be. :) Not all my UserControls have DispatcherTimers - just a few that need to do some sort of "read and refresh" updates. Should I track these DispatcherTimer objects in the ViewModelBase that all UserControls inherit from, and manually stop/dispose of them every time a new UserControl is loaded? Is there a better way?
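    A hedged sketch of the tracking approach suggested at the end of the question, trimmed to the relevant parts (the real ViewModelBase also derives from DependencyObject and implements INotifyPropertyChanged):

        using System;
        using System.Collections.Generic;
        using System.Windows.Threading;

        public abstract class ViewModelBase : IDisposable
        {
            // Timers registered by derived view models, so the base can stop them.
            private readonly List<DispatcherTimer> _timers = new List<DispatcherTimer>();

            protected DispatcherTimer RegisterTimer(DispatcherTimer timer)
            {
                _timers.Add(timer);
                return timer;
            }

            public virtual void Dispose()
            {
                // DispatcherTimer has no Dispose; Stop() unhooks it from the dispatcher
                // so the view model can be garbage-collected.
                foreach (DispatcherTimer timer in _timers)
                    timer.Stop();
                _timers.Clear();
            }
        }

    Then, wherever BindingControl is reassigned, the outgoing control's DataContext can be cast to ViewModelBase and disposed before the new control is set. An alternative worth considering is handling each UserControl's Unloaded event and stopping its timer there, which avoids the manual call at the swap site.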

    Read the article

  • ASP.NET websites under IIS 7.5 (Windows 7) running extremely slow

    - by emzero
    I've just installed Windows 7 x64 Ultimate on my desktop PC. I installed IIS, Visual Studio 2008, registered ASP.NET, etc. The ASP.NET 3.5 website I'm working on runs EXTREMELY slowly on this new IIS. On the STA and PROD servers (Windows 2003 Server) and on my old XP/IIS 5.1 box, everything runs smoothly. A page which usually takes 1-2 seconds to load is taking 8 seconds!!! I saw this post on the IIS forum. It says something about Vista/7 not pooling connections (just to let you know, the website is running locally, but it connects to a SQL Server 2005 hosted on a remote server). It seems that it takes a while to "start loading" the page... I mean, I click refresh and it sits for several seconds "Waiting for localhost"... Then, when it gets a response, it loads the whole page normally. I don't have a clue how to force Win7/IIS 7.5 to pool database connections.

    EDIT: I've created a new, empty ASP.NET web application to see if the problem happens there too. The answer is no - it responds as fast as it should with an empty default page. Maybe it's something related to the DB connection. I will test further; there should be a way to fix it...

    EDIT 2: Debugging the app, I noticed that the delay occurs AFTER the execution of the .NET code (Page_Load, etc.)... so the delay seems to be somewhere in IIS serving the page to the browser.
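    As an aside on the pooling angle: ADO.NET connection pooling is controlled from the connection string rather than from IIS, and it is on by default. A minimal sketch of making it explicit (the server and database names are placeholders) - though the EDIT 2 finding, where the delay comes after Page_Load, suggests the bottleneck may not be the database at all:

        using System.Data.SqlClient;

        // Pooling is on by default; these keywords just make it explicit.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "remoteServer",     // placeholder
            InitialCatalog = "MyDatabase",   // placeholder
            IntegratedSecurity = true,
            Pooling = true,
            MinPoolSize = 1,
            MaxPoolSize = 100
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open(); // the first Open pays the connect cost; later Opens reuse the pool
        }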

    Read the article

  • XamlParseException using Silverlight Toolkit control in Expression Blend

    - by Dan Auclair
    I am having a strange issue opening up my UserControl in Expression Blend when using a Silverlight Toolkit control. My UserControl uses the toolkit's ListBoxDragDropTarget as follows:

        <controlsToolkit:ListBoxDragDropTarget mswindows:DragDrop.AllowDrop="True"
                                               HorizontalContentAlignment="Stretch"
                                               VerticalContentAlignment="Stretch">
            <ListBox ItemsSource="{Binding MyItemControls}" ScrollViewer.HorizontalScrollBarVisibility="Disabled">
                <ListBox.ItemsPanel>
                    <ItemsPanelTemplate>
                        <controlsToolkit:WrapPanel/>
                    </ItemsPanelTemplate>
                </ListBox.ItemsPanel>
            </ListBox>
        </controlsToolkit:ListBoxDragDropTarget>

    Everything works as expected at runtime and looks fine in Visual Studio 2008. However, when I try to open my UserControl in Blend I get XamlParseException: [Line: 0 Position: 0] and I cannot see anything in the design view. More specifically, Blend complains:

        The element "ListBoxDragDropTarget" could not be displayed because of a problem with
        System.Windows.Controls.ListBoxDragDropTarget: TargetType mismatch.

    My Silverlight application is referencing System.Windows.Controls.Toolkit from the Nov. 2009 toolkit release, and I've made sure to include these namespace declarations for the ListBoxDragDropTarget:

        xmlns:controlsToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Toolkit"
        xmlns:mswindows="clr-namespace:Microsoft.Windows;assembly=System.Windows.Controls.Toolkit"

    If I comment out the ListBoxDragDropTarget wrapper and just leave the ListBox, I can see everything fine in the design view, without errors. Furthermore, I realized this is happening with a variety of Silverlight Toolkit controls, because if I comment out ListBoxDragDropTarget and replace it with <controlsToolkit:BusyIndicator /> the same exact error occurs in Blend. What is even weirder is that if I start a brand new Silverlight application in Blend, I can add these toolkit elements without any kind of error, so it seems like something dumb happening with my project references to the toolkit assemblies. I'm pretty sure this has something to do with loading the default styles for the toolkit controls from its generic.xaml, since the error has to do with the TargetType and Blend is probably trying to load up the default styles. Has anyone encountered this issue before, or have any ideas as to what my problem may be?

    Read the article

  • NullPointerException when showing JFileChooser

    - by Geo
    I show a JFileChooser with this snippet:

        public File getDestination() {
            JFileChooser chooser = new JFileChooser();
            chooser.setFileSelectionMode(JFileChooser.DIRECTORIES_ONLY);
            int option = chooser.showSaveDialog(null);
            if (option == JFileChooser.APPROVE_OPTION) {
                return chooser.getSelectedFile();
            }
            return new File(".");
        }

    Usually, the first time it is shown, it displays and works correctly. The second time, it will always throw this exception:

        Exception in thread "Basic L&F File Loading Thread" java.lang.NullPointerException
            at sun.awt.shell.Win32ShellFolder2.pidlsEqual(Unknown Source)
            at sun.awt.shell.Win32ShellFolder2.equals(Unknown Source)
            at sun.awt.shell.Win32ShellFolderManager2.isFileSystemRoot(Unknown Source)
            at sun.awt.shell.ShellFolder.isFileSystemRoot(Unknown Source)
            at javax.swing.filechooser.FileSystemView.isFileSystemRoot(Unknown Source)
            at javax.swing.filechooser.WindowsFileSystemView.isTraversable(Unknown Source)
            at javax.swing.JFileChooser.isTraversable(Unknown Source)
            at javax.swing.plaf.basic.BasicDirectoryModel$LoadFilesThread.run0(Unknown Source)
            at javax.swing.plaf.basic.BasicDirectoryModel$LoadFilesThread.run(Unknown Source)

    java -version says:

        java version "1.6.0_20"
        Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
        Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)

    And the thread I found here says I should downgrade the Java version. Should I follow their advice, or is there something I could have done wrong?

    Read the article

  • getJSON not working if the MVC controller action has a parameter

    - by Paul
    I'm having an issue with a callback. I'm not even getting an error in Firebug. If I alert before and after the getJSON call, both alerts show, but the getJSON call doesn't fire.

        public ActionResult TestPage()
        {
            return View();
        }

        public ActionResult LoadMapLonLats(int mapId)
        {
            //some code
            return Json(_myMaps);
        }

        $("#Search").click(function() {
            $.getJSON("LoadMapLonLats", { mapId: 73 }, loadDbMap);
        });

        function loadDbMap(maps) {
            alert('m');
            $.each(maps, function(i) {
                alert(maps[i]);
            });
        }

    As long as I leave TestPage without a parameter, it works. If I add a parameter to TestPage(int id), then the callback to LoadMapLonLats doesn't work. Seems odd. Of course, TestPage is the page I'm loading, so I need to do some work there before rendering the page. I'm not sure why adding a parameter to the view would break the callback to another action:

        //this breaks the callback to LoadMapLonLats
        public ActionResult TestPage(int id)
        {
            return View();
        }

    Any ideas? It seems like this may be related; if not, sorry - I can post a new thread.

    Read the article

  • How to test Rails 3 Engines with Cucumber & RSpec?

    - by cowboycoded
    I apologize if this question is slightly subjective... I am trying to figure out the best way to test Rails 3 engines with Cucumber & RSpec. In order to test an engine, a Rails 3 app is necessary. Here is what I am currently doing:

    1. Add a Rails test app at the root of the gem (myengine) by running: rails new /myengine/rails_app
    2. Add Cucumber to /myengine/rails_app/features as you would in a normal Rails app
    3. Require the Rails engine gem (using :path => "/myengine") in /myengine/rails_app/Gemfile
    4. Add spec to the root directory of the gem: /myengine/spec
    5. Include the fixtures in /myengine/spec/fixtures and add the following to the Cucumber env.rb:

        Fixtures.reset_cache
        fixtures_folder = File.join(Rails.root, 'spec', 'fixtures')
        fixtures = Dir[File.join(fixtures_folder, '*.yml')].map {|f| File.basename(f, '.yml') }
        Fixtures.create_fixtures(fixtures_folder, fixtures)

    Do you see any problems with setting it up like this? The tests run fine, but I am a bit hesitant to put the features inside the test Rails app. I originally tried putting the features at the root of the gem, with the test Rails app inside features/support, but for some reason my engine would not initialize when I ran the tests, even though I could see the app loading everything else when Cucumber ran. If anyone is working with Rails engines and using Cucumber and RSpec for testing, I would be interested to hear your setup.

    Read the article

  • Managing images in an iPhone/iPad universal app

    - by taber
    Hi, I'm just curious as to what methods people are using to dynamically use larger or smaller images in their universal iPhone/iPad apps. I created a large test image and tried scaling it down (using cocos2d) by 0.46875. After viewing that in the iPhone 4.0 simulator, I found the results were pretty crappy... rough pixel edges, etc. Plus, loading huge image files for iPhone users when they don't need them is pretty lame. So I guess what I will probably have to do is save out two versions of every sprite - large (for the iPad side) and small (for iPhone/iPod Touch) - then detect the user's device and spit out the proper sprite, like so:

        NSString *deviceType = [UIDevice currentDevice].model;
        CCSprite *test;
        if ([deviceType isEqualToString:@"iPad"]) {
            test = [CCSprite spriteWithFile:@"testBigHuge.png"];
        } else {
            test = [CCSprite spriteWithFile:@"testRegularMcTiny.png"];
        }
        [self addChild: test];

    How are you guys doing this? I'd rather avoid sprinkling all of my code with if statements like this. I also want to avoid using .xib files, since it's an OpenGL-based app. Thanks!

    Read the article

  • PDF rendering crashes app (Core Graphics)

    - by Felixyz
    EDIT: The memory leaks turned out to be unrelated to the crashes. The leaks are fixed, but the crashes remain, still mysterious.

    My (iPhone) app does lots of PDF loading and rendering, some of it threaded. Sometimes - it seems always after I flush a page cache upon getting a memory warning - the app crashes with a bad access when trying to draw a PDF page stored in an NSData object. Here is one example trace:

        #0  0x3016d564 in CGPDFResourcesGetResource ()
        #1  0x3016d58a in CGPDFResourcesGetResource ()
        #2  0x3016d94e in CGPDFResourcesGetExtGState ()
        #3  0x3015fac4 in CGPDFContentStreamGetExtGState ()
        #4  0x301629a8 in op_gs ()
        #5  0x3016df12 in handle_xname ()
        #6  0x3016dd9e in read_objects ()
        #7  0x3016de6c in CGPDFScannerScan ()
        #8  0x30161e34 in CGPDFDrawingContextDraw ()
        #9  0x3016a9dc in CGContextDrawPDFPage ()

    But sometimes I get this instead:

        Program received signal: "EXC_BAD_ACCESS".
        (gdb) bt
        #0  0x335625fa in objc_msgSend ()
        #1  0x32c04eba in CFDictionaryGetValue ()
        #2  0x3016d500 in get_value ()
        #3  0x3016d5d6 in CGPDFResourcesGetFont ()
        #4  0x3015fbb4 in CGPDFContentStreamGetFont ()
        #5  0x30163480 in op_Tf ()
        #6  0x3016df12 in handle_xname ()
        #7  0x3016dd9e in read_objects ()
        #8  0x3016de6c in CGPDFScannerScan ()
        #9  0x30161e34 in CGPDFDrawingContextDraw ()
        #10 0x3016a9dc in CGContextDrawPDFPage ()

    Is this an indication that I've mistakenly deallocated an object? It's hard for me to decode what's happening here. This is how I create and retain the various objects involved:

        // Some data was just loaded from the network and is pointed to by "data"
        self.pdfData = data;
        _dataProviderRef = CGDataProviderCreateWithData( NULL, [_pdfData bytes], [_pdfData length], NULL );
        _documentRef = CGPDFDocumentCreateWithProvider(_dataProviderRef);
        _pageRef = CGPDFDocumentGetPage(_documentRef, 1);
        CGPDFPageRetain(_pageRef);
        _pdfFrame = CGPDFPageGetBoxRect(_pageRef, kCGPDFArtBox);

    So the NSData object is retained, and I explicitly retain the page reference. The data provider and the document are already retained by the create functions. And here is my dealloc method:

        -(void)dealloc {
            if (_pageRef) CGPDFPageRelease(_pageRef);
            if (_documentRef) CGPDFDocumentRelease(_documentRef);
            if (_dataProviderRef) CGDataProviderRelease(_dataProviderRef);
            self.pdfData = nil;
            [super dealloc];
        }

    Am I doing anything wrong? Even an assurance that I'm not, with an explanation, would be a help.

    Read the article

  • Greasemonkey @require jQuery not working "Component not available"

    - by Greg K
    I've seen the other question on here about loading jQuery in a Greasemonkey script. Having tried that method, with this @require statement inside my ==UserScript== tags:

        // @require http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js

    I still get the following error message in Firefox's error console:

        Error: Component is not available
        Source File: file:///Users/greg/Library/Application%20Support/Firefox/Profiles/xo9xhovo.default/gm_scripts/myscript/jquerymin.js
        Line: 36

    This stops my Greasemonkey code from running. I've made sure I included the @require for jQuery and saved my .js file before installing it, as required files are only loaded on installation. Code:

        // ==UserScript==
        // @name My Script
        // @namespace http://www.google.com
        // @description My test script
        // @include http://www.google.com
        // @require http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js
        // ==/UserScript==

        GM_log("Hello");

    I have Greasemonkey 0.8.20091209.4 installed on Firefox 3.5.7 on my MacBook Pro, Leopard (10.5.8). I've cleared my cache (except cookies) and have disabled all other plugins except Flashblock 1.5.11.2, Web Developer 1.1.8 and Adblock Plus 1.1.3. My config.xml with my Greasemonkey script installed:

        <UserScriptConfig>
          <Script filename="myscript.user.js" name="My Script" namespace="http://www.google.com" description="My test script" enabled="true" basedir="myscript">
            <Include>http://www.google.com</Include>
            <Require filename="jquerymin.js"/>
          </Script>
        </UserScriptConfig>

    I can see jquerymin.js sitting in the gm_scripts/myscript/ directory. Additionally, is it common for this error to occur in the console when installing a Greasemonkey script?

        Error: not well-formed
        Source File: file:///Users/Greg/Documents/myscript.user.js
        Line: 1, Column: 1
        Source Code: // ==UserScript==

    Read the article

  • Can plugins loaded with MEF resolve their own internal dependencies with the same MEF container?

    - by Dave
    From my experimentation, I think the answer is "kind of", but I could have made a mistake. I have an application that loads appliance plugins with MEF. That part is working fine. Now let's say that my BlenderAppliance wants to resolve several of its own dependencies with MEF, each of which implements IApplianceFeature. I've just added the ImportMany attribute to my plugin, and I made sure to create the plugin using MEF so that the imports work properly. I said "kind of" because some of the plugin's internals (i.e. the model) are loading with MEF just fine, but the IApplianceFeatures aren't. The difference here is that the IApplianceFeatures are themselves assemblies, and at the moment they live one folder above the plugin itself, i.e.

        + application folder
        |   IApplianceFeature1.dll
        |   IApplianceFeature2.dll
        +---+ plugin folder
            |   BlenderAppliance.dll

    Now, if my application uses an AggregateCatalog to load the "." and ".\plugins" folders, why doesn't it ever load the IApplianceFeature assemblies for me? Is it possible / advisable to have the plugin create its own MEF container to resolve its dependencies, or does really nasty stuff happen? If you have any stories about this scenario, please share. :)
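    For reference, a minimal sketch of the host-side composition being described - one AggregateCatalog spanning both folders, so parts discovered in the plugin folder can import the feature assemblies from the application folder through the same container (how the real host is wired is an assumption):

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        // One container covering both directories; exports from either folder
        // are visible to imports declared in the other.
        var catalog = new AggregateCatalog(
            new DirectoryCatalog("."),
            new DirectoryCatalog(@".\plugins"));

        using (var container = new CompositionContainer(catalog))
        {
            var blender = new BlenderAppliance();
            container.ComposeParts(blender);   // satisfies [ImportMany] IApplianceFeature
        }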

    Read the article

  • Managing Dependency Hell with WiX and C#

    - by Tom the Junglist
    We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, a WiX 3 project with separate outputs for x86 and x64 builds. These have been an ongoing problem that I always thought was fixed, only to find out it was still lurking.

    The product itself is a collection of binaries that communicate with each other via .NET Remoting, including a Windows service and a small COM component that is loaded as an add-on in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, while the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .NET Remoting interfaces.

    I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between MS's anal strong-name implementation (specifically, the exact version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted), and having to work around WoW64 virtualization (an x86 MSI can only write to registry/HD locations via WoW64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system.

    What I am looking for is tips + tricks, techniques, or suggestions on how to do things properly, so that I am not fighting with Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (you can't wrap a try{} block around function declarations) and GPF'ing the whole app.

    I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure that if a file needs to go to the GAC, it actually goes to the GAC and stays there until it is appropriate to remove it. Thanks! Tom

    Read the article

  • How much do multiple style sheets slow down a website?

    - by metal-gear-solid
    Here are 3 CSS files (one is only for IE):

        <link rel="stylesheet" href="css/blueprint/screen.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/print.css" type="text/css" media="print">
        <!--[if lt IE 8]><link rel="stylesheet" href="css/blueprint/ie.css" type="text/css" media="screen, projection"><![endif]-->

    If I divide screen.css on my website, it will now be 6 CSS files (one only for IE):

        <link rel="stylesheet" href="css/blueprint/reset.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/grid.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/typography.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/forms.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/print.css" type="text/css" media="print">
        <!--[if lt IE 8]><link rel="stylesheet" href="css/blueprint/ie.css" type="text/css" media="screen, projection"><![endif]-->

    If I go with the second method for a website, even in production, does it really slow down the page loading speed? If yes, by how much? How much will these 3 extra stylesheets affect site performance?

    Read the article

  • Silverlight WCF serialization DataContract(IsReference=true) problem

    - by Ciaran
    Hi, I have a Silverlight 3 UI that accesses WCF services, which in turn access repositories that use NHibernate. To overcome some NHibernate lazy-loading issues with WCF, I'm using my own DataContract surrogate, as described here: http://timvasil.com/blog14/post/2008/02/WCF-serialization-with-NHibernate.aspx. In it I'm setting preserveObjectReferences = true. My model contains cycles (i.e. Customer with IList[Order]). When I retrieve an object from my service it works fine; however, when I try to send that same object back to the WCF service I get the error:

        System.ServiceModel.CommunicationException was unhandled by user code
        Message=There was an error while trying to serialize parameter http://tempuri.org/:searchCriteria.
        The InnerException message was 'Object graph ...' contains cycles and cannot be serialized if
        references are not tracked. Consider using the DataContractAttribute with the IsReference
        property set to true.'

    So cyclical references are now a problem in Silverlight. I tried changing my DataContract to [DataContract(IsReference=true)], but now when I try to retrieve an object from my service I get the following exception:

        System.ExecutionEngineException was unhandled
        Message=Exception of type 'System.ExecutionEngineException' was thrown.
        InnerException:

    Any ideas?
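    For context, the change being tried is marking the contract types at both ends of the cycle. A minimal sketch using the Customer/Order shape mentioned above (the property names are illustrative, not from the actual model):

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]
        public class Customer
        {
            [DataMember]
            public string Name { get; set; }

            [DataMember]
            public IList<Order> Orders { get; set; }   // each Order points back at its Customer
        }

        [DataContract(IsReference = true)]
        public class Order
        {
            [DataMember]
            public Customer Customer { get; set; }     // the cycle
        }

    Note that both the service-side types and whatever the Silverlight client deserializes into would need the attribute for the reference tracking to round-trip.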

    Read the article

  • Silverlight WCF serialization [DataContract(IsReference=true)] problem

    - by Ciaran
    Hi, I have a Silverlight 3 UI that accesses WCF services, which in turn access repositories that use NHibernate. To overcome some NHibernate lazy-loading issues with WCF, I'm using my own DataContract surrogate, as described here: http://timvasil.com/blog14/post/2008/02/WCF-serialization-with-NHibernate.aspx. In it I'm setting preserveObjectReferences = true. My model contains cycles (i.e. Customer with a Collection). When I retrieve an object from my service it works fine; however, when I try to send that same object back to the WCF service I get the error:

        System.ServiceModel.CommunicationException was unhandled by user code
        Message=There was an error while trying to serialize parameter http://tempuri.org/:searchCriteria.
        The InnerException message was 'Object graph ...' contains cycles and cannot be serialized if
        references are not tracked. Consider using the DataContractAttribute with the IsReference
        property set to true.'

    So cyclical references are now a problem in Silverlight. I tried changing my DataContract to [DataContract(IsReference=true)], but now when I try to retrieve an object from my service I get the following exception:

        System.ServiceModel.CommunicationException was unhandled by user code
        Message=The remote server returned an error: NotFound.

    It shouldn't be this hard to do something so trivial...

    Read the article
