Search Results

Search found 4951 results on 199 pages for 'beta tester ben'.


  • Should one reject over-scoped projects?

    - by Little Child
    I spoke to my first potential client today and he told me the requirements of his project: an Android app. He is a well-known designer / photographer in my country and now wants me to "convert the website into an app, custom-tailored". The requirements, details stripped out, are as follows:

      - eCommerce
      - Aggregating all his content (videos, blogs, tweets, etc.) into the app
      - Live streaming any of his studio demos
      - Augmented reality, so that people can see what a painting will look like on their wall before they buy it
      - Taxi sharing

    For a freelance project, it seems over-scoped. I am not saying that I cannot do it. I can. But let me be realistic:

      - There is a steep learning curve when it comes to augmented reality.
      - I am not a tester. I have never white-box tested my own apps; I always black-box test.
      - Since he is a renowned artist, anything short of perfect might harm his public image.

    So I asked him for two weeks before I give him the final answer. Not knowing whom to consult for advice, I am posting the question here. Although the project is interesting and personally challenging, I am split-minded about accepting it, and I would be the only developer on it. Should one reject a project that seems to be over-scoped for one's own abilities?

    Read the article

  • Software Manager who makes developers do Project Management

    - by hdman
    I'm a software developer working in an embedded systems company. We have a Project Manager who takes care of the overall project schedule (including electrical, quality, software and manufacturing), so his software schedule is very brief. We also have a Software Manager, who is my boss. He makes me write and maintain the software schedule, design documents (high- and low-level design), SRS, change management, verification plans and reports, release management, reviews, and of course the software itself. We have only one Test Engineer for the whole software team (10 members), and at any given time there are a couple of projects going on. I'm spending 80% of my time producing these documents. My boss comes from a process background and believes that what we need to improve our software is better documentation:

    (1) He considers the design to be paramount; coding is "just writing the design down", it shouldn't take too long, and "all the code should be written before the hardware is ready".
    (2) He doesn't understand the difference between central and distributed version control, even after we told him it's easier to collaborate with a distributed model.
    (3) He doesn't understand code, yet wants to understand every bug and its proposed solution.
    (4) He believes verification should be done by the developer and validation by the Tester. The thing is, our verification only checks whether the implementation is correct (we don't write unit tests; they are never considered in the schedule), and validation is black-box testing, so the unit tests are missing entirely.

    I'm really confused:

    (1) Am I responsible for maintaining all these documents? It makes me feel like I'm doing the software project management, in essence.
    (2) I don't really like creating documents; I want to solve problems and write code. In my experience, design documents only help to an extent; they are never the solution to better or faster code.
    (3) I feel the boss doesn't really care about making better products, only about looking like a good manager in the eyes of upper management.

    What can I do?

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. The release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6.

    It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.6 version of MySQL Connector/Net brings the following new features:

      * Stored routine debugging
      * Entity Framework 4.3 Code First support
      * Pluggable authentication (third parties can now plug new authentication mechanisms into the driver)
      * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense and the stored routine debugger

    Stored Procedure Debugging
    --------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner: simply click 'Debug Routine' from Server Explorer. You can debug stored routines, functions and triggers. Some of the new features in this release include:

      * Besides normal breakpoints, you can define conditional and pass-count breakpoints.
      * The debugger editor now shows colorizing.
      * You can now change the values of locals in a function scope (previously this caused a deadlock, because functions execute within their own transaction).
      * You can now also debug triggers for 'replace' SQL statements.
      * In general, anything related to locals, watches, breakpoints, stepping and the call stack should work in a similar way to C#'s Visual Studio debugger.

    Some limitations remain, due to the current debugger architecture:

      * Some MySQL functions cannot currently be debugged (get_lock, release_lock, begin, commit, rollback, set transaction level).
      * Only one debug session may be active on a given server.

    The debugger is feature complete at this point. We look forward to your feedback.

    Documentation
    -------------
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows, and you can post questions on our forums at http://forums.mysql.com/. Enjoy, and thanks for the support!

    Read the article

  • Necessary Infrastructure for large project with many components communicating through IPCs

    - by jluzwick
    I have a fairly in-depth question which probably doesn't have an exact answer. As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how the other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately tracked to the offending application? More specifically, which infrastructure elements should be considered necessary for a large project like this, and which are optional but very helpful?

    One example I can think of is some form of common logging infrastructure that allows a developer or tester to easily browse through a log containing messages from numerous components, looking for entries that might point to the culprit program, along with a "trail" of what happened before the issue occurred. I'm thinking of something similar to Android's alogcat tool (see the sketch below). These infrastructure elements should be language-agnostic.

    While all engineers on the team in question should understand these elements, which of them should be understood in great detail by the technical system engineers, and what should the individual software engineers be responsible for adding to their tools to allow such infrastructure to take hold? Please feel free to ask for clarification if something does not make sense; I understand this question is very broad and needs some refinement. I will refine it as necessary from the answers and comments I receive. Thanks for any help!
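
    As a minimal sketch of the common-logging idea above (the field and component names are illustrative assumptions, not from any particular framework): each component writes machine-parseable lines tagged with its own name and a correlation ID that follows a request across IPC hops, so one filter over the merged log reconstructs the "trail" leading up to a failure.

        // Minimal structured-logger sketch (Node.js). Every component appends
        // JSON lines to a shared (or later-merged) log file.
        const fs = require('fs');

        function makeLogger(component, logPath) {
          return function log(level, correlationId, message, extra) {
            const entry = Object.assign({
              ts: new Date().toISOString(), // one clock format across all components
              component: component,         // which process/program wrote this line
              level: level,                 // "debug" | "info" | "error"
              cid: correlationId,           // follows one request across IPC hops
              msg: message
            }, extra);
            fs.appendFileSync(logPath, JSON.stringify(entry) + '\n');
          };
        }

        // Two components logging the same correlation ID leave a trail a developer
        // or tester can pull out with e.g.: grep '"cid":"req-42"' combined.log
        const logRouter  = makeLogger('ipc-router', 'combined.log');
        const logBilling = makeLogger('billing-daemon', 'combined.log');
        logBilling('info', 'req-42', 'sending invoice request over IPC');
        logRouter('error', 'req-42', 'malformed payload', { bytes: 17 });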

    Read the article

  • Deployment from OVA format

    - by Manvendra Bele
    I am deploying a VM from an OVA package. The size of the OVA is 57 GB, and the free space on my datastore is currently 388 GB. At the disk-format step, the wizard shows me in red that the disk size required is 1 TB, so I cannot select thick provisioning. I therefore selected thin provisioning, which shows an estimated disk usage of 112 GB, less than the free space available. But even with thin provisioning selected, the deployment throws an error that it cannot create the disk because the disk is larger than the maximum supported size. My block size is 1 MB. Pasting my exact error here:

        Failed to deploy OVF package: File [datastore1] IMS Tester 1/IMS Tester1_2.vmdk is larger than maximum size supported by datastore 'datastore1'
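
    A worked check of the numbers above, assuming a VMFS3 datastore (the block-size dependence is a VMFS3 property, so treat the diagnosis as an assumption to verify):

        max single-file size = 256 GB x (block size / 1 MB)
        1 MB -> 256 GB,  2 MB -> 512 GB,  4 MB -> 1 TB,  8 MB -> 2 TB

    With a 1 MB block size, no single .vmdk can exceed 256 GB regardless of free space, and the OVA's disk is provisioned at 1 TB. Thin provisioning reduces the space consumed (112 GB) but not the provisioned size, which is what the maximum-file-size check uses, hence the error.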

    Read the article

  • Router won't connect to computer over cable

    - by yyy
    I have a Thomson 784 router that connects to the Internet, and it works nicely enough. The problem is a shielded Cat 5e cable (~30 metres) that runs from my room to the router in another room. Both ends are terminated the same way; both ends are done nicely; I've tested the cable with a cable tester and every wire connects. However, when I connect my computer to the router through it, neither one will find the other. Would anyone please explain what might be wrong?

    P.S.: I have tried several different terminations (currently both ends are wired to the A standard), and the cable has been tested in both directions (as it should be). Yes, I'm at my wit's end =(

    Read the article

  • Windows 7 + NVidia == 640x480 on external projectors

    - by ben2004uk
    Hello, I'm having a major problem and I really need your help! I have a MacBook Air with an NVIDIA 9400M. The problem is that when I connect it to an external projector, Windows 7 (and Vista) only lets me connect at 640x480. I need to use the laptop to present at conferences, so it has to work with a number of different projectors. At the moment I have to use OS X and VMware instead, but that is painfully slow and really doesn't work, and I'm considering buying a new laptop :( Is there any way to override the screen resolutions offered? I've seen some information about EDID. Thanks, Ben

    Read the article

  • Strange traffic on fresh Ubuntu Server install

    - by Fishy
    I've just installed Ubuntu Server on my home box after becoming partially familiar with it at work, wanting to train up as a pen tester. I installed the latest version on a logical partition (the main one contains Win7) and selected none of the extra modules (I think). I installed ngrep and fired it up (along with tcpdump) and immediately saw some strange traffic which I am unable to identify. My PC is exchanging UDP packets every couple of seconds with a seemingly random series of IP addresses, all on the same port (47669, though I did also see it use another port for a while). I watched it do this for about 20 minutes while trying to work out why. The only other traffic was the odd ARP request for the router and SSDP UPnP broadcasts from the router. Does anyone know what this is, or have any advice on how best to find out? Thanks.

    EDIT: Actually, it's not my box generating the traffic. It's receiving the traffic on that port from a series of IP addresses, and returning 'port unreachable' messages.
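
    A narrowing step, using only standard tcpdump/ngrep options (the interface name is an assumption; substitute your own):

        # capture just the suspicious port, skip name resolution, show payload as ASCII
        sudo tcpdump -n -i eth0 -A udp port 47669

        # or the ngrep equivalent, matching any payload on that port
        sudo ngrep -d eth0 -q . 'udp and port 47669'

    If the payloads look like bencoded dictionaries (e.g. starting "d1:ad2:id20:"), one common benign explanation for unsolicited UDP to a fixed high port answered with 'port unreachable' is leftover BitTorrent DHT traffic aimed at a peer that previously held your IP address; that is a guess to check against the captured payloads, not a diagnosis.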

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals go back to the dawn of computing, with places like IBM hosting time-shares for computing power that you could rent for short periods. But things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone once put it to me, if you were starting a business right now, would you bother setting up an Exchange server to manage your email, or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy.

    With all this momentum toward having external services manage more and more of the infrastructure that's not business-unique, why would you burn up a server and a license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That's what we're working on here at Red Gate. Right now it's a private test, but we're growing it and developing it, and it'll be going to a public beta, probably (hopefully) this year. I'm running it on my machines right now.

    The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It's actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that's a standard part of alerting; your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting: keeping your data online and available, providing you with access to the information about how your servers are behaving, everything.

    Maybe it's just me, but I'm really excited by this. I think we're getting to a place where we can really help small and medium-sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you're ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • mod_rewrite not working after upgrade to 12.10

    - by CrowderSoup
    I'm hoping this is a quick and simple fix and that I just need a fresh set of eyes. However, I'm fearful that it might actually be an error in the latest build of the rewrite module. I have a .htaccess file that turns on the rewrite engine (I've made sure the module is enabled), creates some rewrite conditions, and finally a rewrite rule. Here's my .htaccess file for reference:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?request=$1 [L,QSA,NC]
        </IfModule>

    Now for the problem: if I go to hostname.com it works fine. If I go to hostname.com/Index it works fine. However, if I go to hostname.com/index the request is not rewritten and I get a 404. I'm not sure what's going on here. I've used a rewrite rule tester and there doesn't appear to be any issue with the rule itself. Again, this issue didn't manifest until after I upgraded to 12.10, at which point I know Apache was updated. Any thoughts? Has anyone else here experienced this? I know two other people besides myself who have. Thanks in advance for any help you can provide!
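
    One way to see exactly where /index diverges from /Index is Apache 2.2's rewrite logging (12.10 ships an Apache 2.2; these directives were removed in 2.4, and they must go in the server or virtual-host config, not .htaccess - both points are assumptions to verify against your install):

        # in the <VirtualHost> block; restart Apache, then compare the
        # logged decisions for /index vs /Index
        RewriteLog /var/log/apache2/rewrite.log
        RewriteLogLevel 3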

    Read the article

  • Problems with jQuery getJSON using local files in Chrome

    - by Tauren
    I have a very simple test page that uses XHR requests via jQuery's $.getJSON and $.ajax methods. The same page works in some situations and not in others; specifically, it doesn't work in Chrome on Ubuntu. I'm testing on Ubuntu 9.10 with Chrome 5.0.342.7 beta, and on Mac OS X 10.6.2 with Chrome 5.0.307.9 beta.

      - It works correctly when the files are installed on a web server, from both Ubuntu/Chrome and Mac/Chrome (try it out here).
      - It works correctly when the files are installed on the local hard drive in Mac/Chrome (accessed with file:///...).
      - It FAILS when the files are installed on the local hard drive in Ubuntu/Chrome (accessed with file:///...).

    The small set of 3 files can be downloaded in a tar/gzip file from here: http://issues.tauren.com/testjson/testjson.tgz

    When it works, the Chrome console says:

        XHR finished loading: "http://issues.tauren.com/testjson/data.json".
        index.html:16 Using getJSON
        index.html:21 Object result: "success" __proto__: Object
        index.html:22 success
        XHR finished loading: "http://issues.tauren.com/testjson/data.json".
        index.html:29 Using ajax with json dataType
        index.html:34 Object result: "success" __proto__: Object
        index.html:35 success
        XHR finished loading: "http://issues.tauren.com/testjson/data.json".
        index.html:46 Using ajax with text dataType
        index.html:51 {"result":"success"}
        index.html:52 undefined

    When it doesn't work, the Chrome console shows:

        index.html:16 Using getJSON
        index.html:21 null
        index.html:22 Uncaught TypeError: Cannot read property 'result' of null
        index.html:29 Using ajax with json dataType
        index.html:34 null
        index.html:35 Uncaught TypeError: Cannot read property 'result' of null
        index.html:46 Using ajax with text dataType
        index.html:51
        index.html:52 undefined

    Notice that it doesn't even show the XHR requests, although the success handler runs. I swear this was working previously in Ubuntu/Chrome, and I'm worried something got messed up. I already uninstalled and reinstalled Chrome, but that didn't help. Can someone try it out locally on your Ubuntu system and tell me if you have any trouble? Note that it seems to work fine in Firefox.
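
    Two hedged suggestions, since the exact Chrome build matters: Chrome blocks XHR to file:// URLs unless it is started with its --allow-file-access-from-files switch (google-chrome --allow-file-access-from-files), which fits the symptom of the same page working over HTTP; and registering $.ajax's error callback makes the blocked request report itself instead of handing null to the success path:

        // Same request as the page's $.getJSON call, with an error handler so a
        // blocked file:// XHR becomes visible instead of a silent null.
        $.ajax({
          url: 'data.json',
          dataType: 'json',
          success: function (data) { console.log('result: ' + data.result); },
          error: function (xhr, status) {
            // under file:// in Chrome, xhr.status is typically 0 here
            console.log('request failed: ' + status + ' (HTTP status ' + xhr.status + ')');
          }
        });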

    Read the article

  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up. It follows the basic standard layout for SVN repositories, but is more nested:

        |-trunk
        |-branches
        | |-releases
        | | |-releaseA
        | | `-releaseB
        | `-features
        |   |-featureX
        |   `-featureY
        `-tags
          |-releaseA
          | |-beta
          | `-RTP
          `-releaseB
            |-beta
            `-RTP

    (The feature branches are obviously temporary branches, but we have to take them into consideration, as it won't be feasible to close all of them at once in the near future.)

    For several reasons, but primarily because merges have been becoming an increasing pain, we are considering switching to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g., yasvn2hg, hg convert and svn2hg), with yasvn2hg being the most promising, but none of them seem able to deal with nested hierarchies; they all assume that branches and tags are organized in one flat directory each. The choice between named branches or clones as the conversion target for old SVN branches is not a limiting factor in this case, as either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes, but haven't decided on one yet. I'd obviously be interested in recommendations for, or experiences with, similar setups as well.

    So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure it would be the right approach in the first place; it depends on whether the tools can handle historic merges without being aware of all the other branches.

    Read the article

  • How to prove writing specifications beats code cowboys?

    - by Andrew Grant
    So I have a problem. Or rather my friend has a problem, since I would never write about my company on an internet forum. At my friend's company, specification writing is, shall we say, a little underused. There's a deeply ingrained culture of writing code first and asking questions later, whether it's for a library routine or a new tool to inflict on their long-suffering designers. This of course leads to situations where functionality is partially correct, incorrect, or just completely missing ("oh, just save before trying anything you may want to undo"). This usually results in a loss of productivity for those poor designers, or beta periods where bug-fixing time is largely spent implementing things correctly.

    My friend has found his suggestions of writing (and testing against) specifications to be generally well received. Most of his colleagues have embraced the wonderful feeling of discovering false assumptions on paper instead of at 11pm on a Sunday in the middle of beta. Viva la revolution! However, there are a few who poo-poo anything that stands between their task and a keyboard. They laugh at the thought of actually designing anything, and write code with merry abandon. Mostly these are senior, long-employed developers, reluctant to "waste time". The problem is that this second group of heretics invariably produces things (or at least something) quicker than the first. Subsequently this becomes justification along the lines of "It's pointless to write specifications for something as simple as an image resizer! Oh, and those bugs where width != height, or where the image uses RLE, just need a few tweaks."

    And now the question :) Other than saying "told you so" at the end of a project, what are some good short-term ways to demonstrate that the practice of writing functional or technical specifications leads to better software in the long run? Cheers!

    Read the article

  • ASP.NET MVC2: Client-side validation not working with Start.js

    - by Shaggy13spe
    OK, this is strange. I would hope it's something I'm doing wrong and not that MS has two technologies that simply don't work together. (UPDATE: see the bottom of this post for the script tag order in the HEAD section.) I'm trying to use the DataView template and client-side validation. If I include a reference to:

        <script src="http://ajax.microsoft.com/ajax/beta/0911/Start.js" type="text/javascript"></script>

    by itself, the DataView template works fine. But if I put in the following references:

        <script src="http://ajax.microsoft.com/ajax/jquery.validate/1.7/jquery.validate.min.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script>

    then I get the following error:

        Error: Type._registerScript is not a function
        Source File: http://ajax.microsoft.com/ajax/beta/0911/MicrosoftAjaxTemplates.js
        Line: 1

    and:

        Error: Sys.get("$listings") is null
        Source File: http://localhost:12370/Listings
        Line: 76

    Here's the code calling the DataView:

        $(document).ready(function () {
            LoadMap();
            Sys.require([Sys.components.dataView, Sys.scripts.jQuery], function() {
                $("#listings").dataView();
                Sys.get("$listings").set_data(listings.Data);
                updateMap(listings.Data);
            });
        });

    I would really appreciate any help with this one. Thanks!

    UPDATE: I've tried moving around the order of the last four script tags, but to no avail.

    Read the article

  • How to tell if Microsoft Works is 32 or 64 bit? Please Help!

    - by Bill Campbell
    Hi, I am trying to convert one of our apps to run on Win7 64-bit, from XP 32-bit. One of the things it uses is Excel to import files. It's a little complicated, since it was using Microsoft.Jet.OLEDB.4.0 (Excel). I found that Office 14 (2010) has a 64-bit version I can download, so I downloaded the Office 2010 Beta, but it didn't seem to install Microsoft.ACE.OLEDB.14.0. I then found that I could download the "2010 Office System Driver Beta: Data Connectivity Components", which has ACE.OLEDB.14 in it, but when I try to install it, the installer tells me "You cannot install the 64-bit version of Access Database engine for Microsoft Office 2010 because you currently have 32-bit Office products installed". How do I determine which 32-bit Office products this is referring to? My Dell came with Microsoft Works installed, and I don't know whether it is 32- or 64-bit. Is there any way to tell? I don't want to uninstall it if it's not the problem, and I'm not sure what else might be. Any help would be appreciated! Thanks, Bill

    Read the article

  • MS Ajax FilteredTextBox stopped working

    - by TenaciousImpy
    Hi, I'm using Microsoft's AJAX library via their CDN. Everything was going fine in an old project. I then created a new project and copied and pasted some of the AJAX code, hoping it would work. However, my code isn't working and I've been struggling to figure out why. I'm trying to implement a filtered textbox:

        <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="http://ajax.microsoft.com/ajax/beta/0911/Start.debug.js" type="text/javascript"></script>
        <script src="http://ajax.microsoft.com/ajax/beta/0911/extended/ExtendedControls.debug.js" type="text/javascript"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            Sys.require(Sys.components.filteredTextBox, function() {
                $("[id$='UserName']").filteredTextBox({
                    InvalidChars: ".",
                    FilterType: Sys.Extended.UI.FilterTypes.Numbers |
                                Sys.Extended.UI.FilterTypes.UppercaseLetters |
                                Sys.Extended.UI.FilterTypes.LowercaseLetters |
                                Sys.Extended.UI.FilterTypes.Custom,
                    FilterMode: Sys.Extended.UI.FilterModes.ValidChars
                });
            });
        });
        </script>

    When I run this, the textbox isn't filtered and I can enter any character. In Chrome, I used the Dev Tools and found two errors on the page. In my .aspx page:

        Uncaught TypeError: Cannot read property 'Numbers' of undefined

    (regarding the Sys.Extended.UI.FilterTypes.Numbers in the code above). In filteredtextboxbehavior.debug.js:

        Uncaught TypeError: Object #<an Object> has no method 'registerComponent'

    Any ideas why I'm getting these errors? The code worked previously, so I'm not sure if I had added extra settings in my old project that the new one doesn't have. Thanks!

    P.S.: The textbox itself is being found, since any other actions on it are recognized.

    Read the article

  • Problems with Merb on Snow Leopard

    - by hamhoagie
    I've recently started looking at Merb, for use with some small projects around the office. I'm trying to set up my first project following the docs, and am encountering an exception such as:

        foo:beta user$ merb
        Merb root at: /Users/user/code/merb/beta
        Loading init file from ./config/init.rb
        Loading ./config/environments/development.rb
        ~ Connecting to database...
        ~ Loaded slice 'MerbAuthSlicePassword' ...
        ~ Parent pid: 39794
        ~ Compiling routes...
        ~ Activating slice 'MerbAuthSlicePassword' ...
        ~
        ~ FATAL: Mongrel is not installed, but you are trying to use it. You need to either install mongrel or a different Ruby web server, like thin.

    I have installed Mongrel from gem as well as from MacPorts, and am confused by this exception. Significant stats: ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10]. From my installed gems:

        merb (1.1.0)
        merb-action-args (1.1.0)
        merb-assets (1.1.0)
        merb-auth (1.1.0)
        merb-auth-core (1.1.0)
        merb-auth-more (1.1.0)
        merb-auth-slice-password (1.1.0)
        merb-cache (1.1.0)
        merb-core (1.1.0)
        merb-exceptions (1.1.0)
        merb-gen (1.1.0)
        merb-haml (1.1.0)
        merb-helpers (1.1.0)
        merb-mailer (1.1.0)
        merb-param-protection (1.1.0)
        merb-slices (1.1.0)
        merb_datamapper (1.1.0)
        mongrel (1.1.5)

    Merb documentation is non-existent, so I find myself stuck. Thanks in advance.

    Read the article

  • What's the expectation for silverlight 2's lifespan?

    - by dwidel
    Way back when, I built an internal site for a customer using Silverlight 2. They've been happy with it and I've barely had to touch it. Is the expectation that this site will always work? What I'm afraid of is suddenly getting a call years from now that the users installed Silverlight X and now it's broken, and I'll have to convert it immediately past Y versions of Silverlight to get the site back up, and I don't even do Silverlight anymore. I already went through this once when it went from 2 beta to 2 release, scrambling to fix all the breaking changes and get the site back up. It wasn't as big a deal then, as we were in beta anyway. I could upgrade now, but it would be really tough to go back to the customer and ask for money to do an upgrade when they are happy now and will see no noticeable benefit from staying current. Additionally, there are some third-party controls that would have to be licensed again. So I guess what I'm asking is: is there a known end of life? Or do we just play it by ear?

    Read the article

  • Error building Visual Studio 2010 Silverlight 4 projects on Windows 7 with XP Mode

    - by Kevin Dente
    I installed Visual Studio 2010 Beta 2 in an XP Mode VM on Windows 7, then created a trivial Silverlight 4 (beta) project and tried to build it. I get the following error:

        Error 1 The "ValidateXaml" task failed unexpectedly.
        System.IO.FileLoadException: Could not load file or assembly 'file://\\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)
        File name: 'file://\\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll'
        --- System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.
        at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
        at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
        at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks)
        at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark)
        at System.Reflection.Assembly.LoadFrom(String assemblyFile)
        at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task)
        at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task)
        at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute()
        at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
        at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)

    I believe this is related to the fact that XP Mode redirects the My Documents folder to the host, turning it into a network-share location, and some sort of CAS / security policy is being triggered. Anyone know how to fix it?
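
    The exception text itself names the fix it has in mind: the .NET 4 loadFromRemoteSources switch, documented at the fwlink URL in the error. A sketch of that config element follows; where to put it is an assumption - it belongs in the configuration file of the process performing the build, e.g. devenv.exe.config (next to devenv.exe) for builds run inside Visual Studio:

        <configuration>
          <runtime>
            <!-- Grant full trust to assemblies loaded from network locations,
                 such as the \\tsclient redirected-folder share used by XP Mode. -->
            <loadFromRemoteSources enabled="true"/>
          </runtime>
        </configuration>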

    Read the article

  • iPhone 4.0 on iPhone but still Ad-Hoc compile for 3.1.3?

    - by Mark
    Device: Version: 3.1, Build: 3511
    Device: iPhone, OS: iPhone OS 4.0
    Xcode 3.2.2 (old)
    Xcode 3.2.3 (new; for the iPhone OS 4.0 beta)

    Background: As you can see, I installed 4.0 on my iPhone, having read on this forum that it's hard to nearly impossible to downgrade back to 3.1.3 - but it's the only device I have and use for development. When I try to continue to develop and build with the old Xcode, it tells me that "No provisioned iPhone OS device is connected". When I select Simulator, it does compile and build; however, when I distribute that build it does not work on my testers' devices - they get a signing error. When I run the new Xcode, it does compile and build on the Device, and when I distribute that build it does work on my testers' devices (which run the current official version, 3.1.3).

    Questions: Why is there a difference between building for the Simulator and for the Device? A simulator build never seems to work on my testers' devices because of signing issues, while the device build does. The old Xcode has effectively become useless to me, yet I have read that you may not use the beta Xcode to build your application for release. Knowing the above, how am I able to pull this off with my current setup, given that the old Xcode won't let me build properly?

    Read the article

  • Nested Object Forms not working as expected

    - by Craig Walker
    I'm trying to get a nested model forms view working. As far as I can tell I'm doing everything right, but it still does not work. I'm on Rails 3 beta 3. My models are as expected:

        class Recipe < ActiveRecord::Base
          has_many :ingredients, :dependent => :destroy
          accepts_nested_attributes_for :ingredients
          attr_accessible :name
        end

        class Ingredient < ActiveRecord::Base
          attr_accessible :name, :sort_order, :amount
          belongs_to :recipe
        end

    I can use Recipe.ingredients_attributes= as expected:

        recipe = Recipe.new
        recipe.ingredients_attributes = [
          {:name => "flour", :amount => "1 cup"},
          {:name => "sugar", :amount => "2 cups"}]
        recipe.ingredients.size  # -> 2; ingredients contains expected instances

    However, I cannot create new object graphs using a hash of parameters as shown in the documentation:

        params = { :name => "test",
                   :ingredients_attributes => [
                     {:name => "flour", :amount => "1 cup"},
                     {:name => "sugar", :amount => "2 cups"}] }
        recipe = Recipe.new(params)
        recipe.name         # -> "test"
        recipe.ingredients  # -> []; no ingredient instances in the collection

    Is there something I'm doing wrong here? Or is there a problem in the Rails 3 beta?

    Read the article

  • Why do I get "Error 6060" when I try to use DBD::Advantage with a 64-bit perl on Linux?

    - by WarheadsSE
    I realize that I am attempting to go beyond the "supported" behavior of the manufacturer's released drivers for Perl; after all, they have only released the package with x86 .so files. However, since I cannot use their package with x64 Perl on a RHEL 5.4 x86_64 box, and maintaining a separate install of x86 Perl just for this one package is unappealing, I have made an attempt to get this puppy working, thanks to the 64-bit .so files that accompany other Advantage driver packages. What I have done to this point:

      - downloaded the beta 10 DBI drivers, in 32-bit
      - downloaded the beta 10 PHP extension (it contains both 32-bit and x86_64 libraries)
      - copied the required libraries into the ads-lib location (e.g. /usr/local/ads/lib64)
      - compiled the Perl DBI driver with the path to the lib64 .so files

    Good compilation, good install, good use. The problem is that I always get:

        failed: [iAnywhere Solutions][Advantage SQL][ASA] Error 6060: Advantage Database Server not available on specified server. axServerConnect (SQL-HY000)(DBD: db_login/SQLConnect err=-1)

    Does anyone have any ideas?

    EDIT: fixed package name in post title.
    EDIT: Updated title. It appears that it's not just the x64 Perl, but the RHEL 5.4 underneath that may be interfering. As commented below, I managed to shoe-horn an x86 Perl onto the system and compile DBD::Advantage 9.99, later replacing that with 9.10, and neither of these x86 builds would connect either. Neither library (9.99 or 9.10), in either bit-edness, will connect from this x86_64 server to the Windows server's UNC path. I have successfully mounted this share without problems, but still I cannot seem to connect to the 9.1 server. I have tried:

        \\hostname\PATH
        \\FQDN\PATH
        \\IP\PATH

    and all of these variations with the (default) port 6262 included. My Windows machine connects fine with both 9.1 and 9.99 from Strawberry Perl.

    Read the article

  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (I did, however, solve my last question myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain in the ass. I successfully managed to merge on months in my previous datasets, but it seems I have a final dataset which only has quarter as its time count variable. So where all my normal databases have month 1-xxx as an indicator of time, this database has quarter as an indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database.

    Quick summary:

        Quarter: Quarter 0 = JAN1996-MAR1996
        Month:   Month 0   = JAN1996

    Example: TVOL

        TVOL    Ticker    Quarter
        1500    AA        -1
        52546   BB        15

    Example: WORK

        BETA    Ticker    Month
        1.52    AA        2
        1.54    BB        3

    Example: Merged

        BETA    TVOL    Ticker    Month
        1.52    500     AA        2

    I now want to merge these two tables using the following relationship: if the month is in quarter i, the data of quarter i-1 has to be used. So if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind this observation. Something like: IF month is in quarter i, use data of quarter i-1. Also, as TVOL is measured quarterly and has to be put in monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!
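
    Spelling out the month-to-quarter arithmetic implied above (month 0 = JAN1996, quarter 0 = JAN-MAR 1996):

        quarter(m)      = floor(m / 3)
        quarter_used(m) = floor(m / 3) - 1

    Check against the example: 2FEB1996 is month 1, and floor(1/3) - 1 = -1, matching the quarter -1 lookup described for that observation.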

    Read the article

  • How to suppress/control logging of Wagon-FTP Maven extension?

    - by Vincenzo
    I'm deploying a Maven site by FTP, using Wagon-FTP. It works fine, but the output is full of FTP connection/authentication details, which effectively expose logins and passwords to everybody (especially if the project is open source and its CI protocols are publicly accessible):

        [...]
        [INFO]
        [INFO] --- maven-site-plugin:3.0-beta-3:deploy (default-deploy) @ rempl ---
        Reply received: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        220-You are user number 1 of 50 allowed.
        220-Local time is now 09:08. Server port: 21.
        220 You will be disconnected after 15 minutes of inactivity.
        Command sent: USER ****
        Reply received: 331 User **** OK. Password required
        Command sent: PASS ********
        Reply received: 230-User **** has group access to: ***
        230 OK. Current restricted directory is /
        [...]

    Is it possible to suppress this logging? Or configure it? This is the section of my pom.xml where Wagon-FTP is used:

        [...]
        <build>
          <extensions>
            <extension>
              <groupId>org.apache.maven.wagon</groupId>
              <artifactId>wagon-ftp</artifactId>
              <version>1.0-beta-7</version>
            </extension>
          </extensions>
          [...]
        </build>
        [...]

    Read the article

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So - I've been using this method of file uploading for a bit, but it seems that Google Gears has poor support in the newer browsers that implement the HTML5 specs. I've heard the word "deprecated" floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

      - Users MUST be able to select multiple files for uploading in the dialog.
      - I MUST be able to receive status updates on the transmission of a file (progress bars).
      - I would like to be able to use PUT requests instead of POST.
      - I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
      - I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has the features I'm missing in my searches. An example of uploading code using Gears:

        // select some files:
        var desktop = google.gears.factory.create('beta.desktop');
        desktop.openFiles(selectFilesCallback);

        function selectFilesCallback(files) {
          $.each(files, function(k, file) {
            // this code actually goes through a queue, and creates some status bars
            // but it is unimportant to show here...
            sendFile(file);
          });
        }

        function sendFile(file) {
          var request = google.gears.factory.create('beta.httprequest');
          request.open('PUT', upl.url);
          request.setRequestHeader('filename', file.name);
          request.upload.onprogress = function(e) {
            // gives me % status updates... allows e.loaded/e.total
          };
          request.onreadystatechange = function() {
            if (request.readyState == 4) {
              // completed the upload!
            }
          };
          request.send(file.blob);
          return request;
        }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed that item to a "like" instead of a "must".
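
    For comparison, a minimal sketch of the same flow on the plain HTML5 APIs that were replacing Gears at the time: the multiple attribute covers multi-file selection, and XMLHttpRequest level 2 covers PUT with a File body plus upload progress events. Browser support for these was still uneven then, so treat this as the direction rather than a drop-in replacement (the /upload URL is a placeholder):

        <input type="file" id="picker" multiple>
        <script>
        // HTML5 equivalent of the Gears flow above: input.files replaces
        // desktop.openFiles, xhr.upload.onprogress replaces the Gears
        // progress callback, and XHR2 accepts a File/Blob body for PUT.
        document.getElementById('picker').onchange = function () {
          for (var i = 0; i < this.files.length; i++) {
            sendFile(this.files[i]);
          }
        };

        function sendFile(file) {
          var xhr = new XMLHttpRequest();
          xhr.open('PUT', '/upload/' + encodeURIComponent(file.name));
          xhr.setRequestHeader('filename', file.name);
          xhr.upload.onprogress = function (e) {
            if (e.lengthComputable) {
              // progress-bar updates, mirroring request.upload.onprogress in Gears
              console.log(file.name + ': ' + Math.round(100 * e.loaded / e.total) + '%');
            }
          };
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
              // completed the upload!
            }
          };
          xhr.send(file); // XHR2 accepts a File/Blob directly
          return xhr;
        }
        </script>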

    Read the article
