Search Results

Search found 1248 results on 50 pages for 'jeff beck'.

Page 22 of 50

  • Distributed filesystem across a slow link

    - by Jeff Ferland
    I have a picture in my head of a link that is too slow for real-time transfer of files, but fast enough to catch up every day. What I'd like to see is a master <- master setup where, when I write a file to Server A, the metadata transfers to Server B immediately and the file itself transfers when the link is idle, or on demand if Server B's client tries to read the file before Server A has sent it. It seems that there are many filesystems which perform well over fast links, but I don't know of any that do well with a big bottleneck and a few hours of latency.

    Read the article

  • MySql transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone can. The facts:

    - 30 GB MySQL database on a remote server (about 20,000,000 rows)
    - the data is updated once weekly on the local network (MySQL)
    - I need to transfer/replace the remote database with the locally updated one
    - the connection is about 2 MB/s (real megabytes, not megabits) up/down
    - the remote MySQL server cannot have any downtime

    What I have tried so far:

    - Navicat data sync: OK, but takes about 3 days to finish
    - dbForge: OK, but needs 5 days to finish
    - transferring a mysqldump to the remote server and executing it: about a day, but a lot of downtime
    - rsync of the database folder /mysql/lib/MY_DATABASE: 4 hours, but afterwards I always need to run a repair on the remote server, which takes about 2 hours and means a lot of downtime
    - mysqldump piped from the command line directly to the server: still not satisfied, many problems
    - MySQL replication: too slow

    I could list more things that I tried. Anyway, what is the best way to refresh the remote MySQL database weekly with zero downtime and no huge server load? If you have any ideas, please share.
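    One pattern that fits the zero-downtime requirement: load the new data into a staging schema while the live tables keep serving, then swap the two with RENAME TABLE, which MySQL applies atomically. A minimal sketch, assuming the MySql.Data ADO.NET provider; the connection string, schema, and table names are placeholders:

        using System;
        using MySql.Data.MySqlClient;

        class WeeklySwap
        {
            static void Main()
            {
                using (var conn = new MySqlConnection(
                    "Server=remote-host;Database=shop;Uid=sync;Pwd=...;"))
                {
                    conn.Open();

                    // RENAME TABLE is atomic: readers see either the old table
                    // or the new one, never a missing table.
                    var swap = new MySqlCommand(
                        "RENAME TABLE shop.orders TO shop.orders_old, " +
                        "shop_staging.orders TO shop.orders", conn);
                    swap.ExecuteNonQuery();

                    // Drop the old copy once the swap is verified.
                    new MySqlCommand("DROP TABLE shop.orders_old", conn)
                        .ExecuteNonQuery();
                }
            }
        }

    For a whole database you would issue one RENAME TABLE listing every table pair; MySQL still applies the whole statement atomically, so readers never see a half-swapped schema.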

    Read the article

  • Is data=journal on a separate device on Ext4 as good as using a RAID controller with battery backed cache for file system consistency?

    - by Jeff Strunk
    It seems to me that data=journal prevents file system inconsistency in the case of power failure. Using it with a dedicated journal device mitigates the performance penalty of writing the data twice. A power outage would still lose the data that is currently being written to the journal, but the file system on disk would always be consistent. If that amount of loss is acceptable, is a RAID controller with battery backed cache really worthwhile?

    Read the article

  • Understanding MySQL Query Caches and when to implement it?

    - by Jeff
    On our current MySQL server the query cache is enabled:

    Qcache_hits: 31913
    Qcache_inserts: 50959
    Qcache_lowmem_prunes: 9320
    Qcache_not_cached: 209320
    Qcache_queries_in_cache: 986
    Com_update: 0
    Com_delete: 0

    I do not fully understand the query cache - I am currently reading about it and trying to understand it. Our database holds inventory data, customer data, employee data, sales data and so forth, and a query is very rarely run more than once. About the only case I can picture is someone viewing specific sales information twice. Basically everything in our system changes constantly; it is always being updated, deleted, or inserted, and off the top of my head I can't picture users running the same query twice within a week. Do I even need the query cache enabled? I am guessing the inserts mean 51k entries have been added, but only 986 of those are currently stored? Would it be a good idea to reset the cache, watch it for a week, and check how many of the cached queries are actually hit, to see if it returns any benefit? Any help/guidance on this is appreciated, thanks
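    For a rough sense of whether the cache is earning its keep, the counters above can be turned into a hit rate: 31913 hits out of 31913 + 50959 + 209320 = 292192 SELECT attempts is only about 11%, and the 9320 low-memory prunes suggest the cache is churning. A small monitoring sketch for the watch-it-for-a-week idea, assuming the MySql.Data provider (connection string is a placeholder):

        using System;
        using MySql.Data.MySqlClient;

        class QcacheSample
        {
            static void Main()
            {
                using (var conn = new MySqlConnection(
                    "Server=dbhost;Uid=monitor;Pwd=...;"))
                {
                    conn.Open();
                    var cmd = new MySqlCommand(
                        "SHOW GLOBAL STATUS LIKE 'Qcache%'", conn);
                    long hits = 0, inserts = 0, notCached = 0;
                    using (var r = cmd.ExecuteReader())
                    {
                        while (r.Read())
                        {
                            string name = r.GetString(0);
                            long value = long.Parse(r.GetString(1));
                            if (name == "Qcache_hits") hits = value;
                            if (name == "Qcache_inserts") inserts = value;
                            if (name == "Qcache_not_cached") notCached = value;
                        }
                    }
                    // Rough hit rate: hits over all SELECTs that consulted the cache.
                    double rate = (double)hits / (hits + inserts + notCached);
                    Console.WriteLine("Query cache hit rate: {0:P1}", rate);
                }
            }
        }

    Sampling this daily and watching whether the rate moves would answer the question with data rather than guesswork.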

    Read the article

  • Requisite of file.append

    - by Jeff Strunk
    Is it possible to make salt require that a particular file was appended to, as opposed to the file merely existing? It seems like I can only require a file state; the source looks like it strips any method names out of a require attribute. In the example below, I only want the foo service to run if my lines have been appended to /etc/security/limits.conf.

        file.append:
          - name: /etc/security/limits.conf
          - text:
            - root hard nofile 65535
            - root soft nofile 65535

        foo:
          service.running:
            - enable: True
            - require:
              - file.append: /etc/security/limits.conf

    Read the article

  • SMTP 25 blocked externally

    - by Jeff
    Not sure how to title this question... We run an Exchange server with around 80 internal users. All outgoing mail is relayed off a smart host (the ISP's SMTP server), so nothing is actually sent to the world via our server. I wanted to check the server: locally I can telnet to port 25 with no issues and receive the ESMTP "service ready" reply, but whenever I do it from an external address (off our local network) I receive "unable to connect, error 10060". Can this cause problems with SPF records and reverse DNS? Should my Exchange server be able to accept SMTP requests, requiring authentication before I am able to send from external addresses? If so, how? Also, the Exchange server is behind a NAT (ASA) device, so more than likely the NAT is not configured to route port 25 requests to the Exchange server. Thanks

    Read the article

  • The letter 'W' gets rendered oddly

    - by Jeff E
    It seems that with this particular font (it's one of the Ubuntu fonts; I can't tell which one exactly, because Chrome's debugger doesn't tell me which font-family won the CSS battle) the letter 'W' (both cases), at the default zoom level and typically at smaller sizes, renders very oddly: the right-hand side of the letter is totally wrong (screenshot omitted). Changing the zoom level mitigates (but does not eliminate) the issue. Does anyone know how to fix this? It doesn't happen in Firefox or IE, so I assume it's not an issue with the font itself.

    Read the article

  • URL Rewrite – Protocol (http/https) in the Action

    - by OWScott
    IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP header. However, there is one commonly used server variable that isn't readily available. That's the protocol: HTTP or HTTPS. You can easily check if a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule. There isn't a variable available to dynamically set the protocol in the action part of the rule. What I wish is that there were a variable like {HTTP_PROTOCOL} which would have a value of 'HTTP' or 'HTTPS'. There is a server variable called {HTTPS}, but the values of 'on' and 'off' aren't practical in the action. You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren't useful in the action.

    Let me illustrate. The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.

        <rule name="Redirect to www">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="http://www.localtest.me/{R:1}" />
        </rule>

    The problem is that it forces the request to HTTP even if the original request was for HTTPS.

    Interestingly enough, I planned to blog about this topic this week when I noticed in my twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic. He beat me to the punch by just a couple of days. However, I figured I would still write my post. While his solution is an excellent one, I personally handle this another way most of the time. Plus, it's a commonly asked question that isn't documented well enough on the web yet, so having another article out there won't hurt.

    I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four. Don't let the choices overwhelm you, though. Let's keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Option 3 and Option 4 need only be considered if you have a more unique situation. All four options will work for most situations.

    Option 1 – CACHE_URL, single rule

    There is a server variable that has the protocol in it: {CACHE_URL}. This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5). All we need to do is extract the HTTP or HTTPS and we'll be set. This tends to be my preferred way to handle this situation. Indeed, Jeff did briefly mention this in his blog post:

        … you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical "or" match for conditions.

    Thus the problem. If you have multiple conditions set to "Match Any" rather than "Match All" then this option won't work. However, I find that 95% of all rules that I write use "Match All", and therefore, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule. The caveat is that if you use "Match Any" then you must consider one of the next two options.

    Enough with the preamble. Here's how it works. Add a condition that checks {CACHE_URL} with a pattern of "^(.+)://". Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS.
    In URL Rewrite 2.0 or greater you can check "Track capture groups across conditions", make that condition the first condition, and you have yourself a back-reference of {C:1}. The "Redirect to www" example, with support for maintaining the protocol, becomes:

        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="true">
            <add input="{CACHE_URL}" pattern="^(.+)://" />
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
        </rule>

    It's not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it's pretty close. I also like this option since I often create rule examples for other people, and this type of rule is portable since it's self-contained within a single rule.

    Option 2 – Using a Rewrite Map

    For a safer rule that works for both "Match Any" and "Match All" situations, you can use the Rewrite Map solution that Jeff proposed. It's a perfectly good solution, with the only drawback being the ever-so-slight extra effort to set it up, since you need to create a rewrite map before you create the rule. In other words, if you choose to use this as your sole method of handling the protocol, you'll be safe.

    After you create a Rewrite Map called MapProtocol, you can use "{MapProtocol:{HTTPS}}" for the protocol within any rule action. Following is an example using a Rewrite Map.

        <rewrite>
          <rules>
            <rule name="Redirect to www" stopProcessing="true">
              <match url="(.*)" />
              <conditions trackAllCaptures="false">
                <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
              </conditions>
              <action type="Redirect"
                url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
            </rule>
          </rules>
          <rewriteMaps>
            <rewriteMap name="MapProtocol">
              <add key="on" value="https" />
              <add key="off" value="http" />
            </rewriteMap>
          </rewriteMaps>
        </rewrite>

    Option 3 – CACHE_URL, Multi-rule

    If you have many rules that will use the protocol, you can create your own server variable which can be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier-to-remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}. The potential issue with this rule is that if you don't have access to the server level (e.g. in a shared environment) then you cannot set server variables without permission.

    First, create a rule and place it at the top of the set of rules. You can create this at the server, site or subfolder level. However, if you create it at the site or subfolder level, then the HTTP_PROTOCOL server variable needs to be approved at the server level. This can be achieved in IIS Manager by navigating to URL Rewrite at the server level, clicking "View Server Variables" in the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level then this step is not necessary. Following is an example of the first rule, which creates HTTP_PROTOCOL, and then a rule that uses it. The Create HTTP_PROTOCOL rule only needs to be created once on the server.
<rule name="Create HTTP_PROTOCOL"> <match url=".*" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{CACHE_URL}" pattern="^(.+)://" /> </conditions> <serverVariables> <set name="HTTP_PROTOCOL" value="{C:1}" /> </serverVariables> <action type="None" /> </rule>   <rule name="Redirect to www" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> </conditions> <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" /> </rule> Option 4 – Multi-rule Just to be complete I’ll include an example of how to achieve the same thing with multiple rules. I don’t see any reason to use it over the previous examples, but I’ll include an example anyway.  Note that it will only work with the “Match All” setting for the conditions. <rule name="Redirect to www - http" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> <add input="{HTTPS}" pattern="off" /> </conditions> <action type="Redirect" url="http://www.localtest.me/{R:1}" /> </rule> <rule name="Redirect to www - https" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> <add input="{HTTPS}" pattern="on" /> </conditions> <action type="Redirect" url="https://www.localtest.me/{R:1}" /> </rule> Conclusion Above are four working examples of methods to call the protocol (HTTP or HTTPS) from the action of a URL Rewrite rule.  You can use whichever method you most prefer.  I’ve listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice.  In any of the cases, hopefully you can use this as a reference for when you need to use the protocol in the rule’s action when writing your URL Rewrite rules. Further information: Viewing all Server Variable for a site. URL Parts available to URL Rewrite Rules Further URL Rewrite articles

    Read the article

  • How do I test UrlHelper.RouteUrl()?

    - by Jeff Putz
    I'm having a tough go trying to figure out what I need to mock in my tests to show that UrlHelper.RouteUrl() is returning the right URL. It works, but I'd like to have the right test coverage. The meat of the controller method looks like this:

        var urlHelper = new UrlHelper(ControllerContext.RequestContext);
        return Json(new BasicJsonMessage
        {
            Result = true,
            Redirect = urlHelper.RouteUrl(new
            {
                controller = "TheController",
                action = "TheAction",
                id = somerecordnumber
            })
        });

    Testing the result object is easy enough, like this:

        var controller = new MyController();
        var result = controller.DoTheNewHotness();
        Assert.IsInstanceOf<JsonResult>(result);
        var data = (BasicJsonMessage)result.Data;
        Assert.IsTrue(data.Result);

    result.Redirect is always null because the controller obviously doesn't know anything about the routing. What do I have to do to the controller to let it know? As I said, I know it works when I exercise the production code, but I'd like some testing assurance. Thanks for your help!
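    Worth noting: new UrlHelper(ControllerContext.RequestContext) resolves against the static RouteTable.Routes, so a test has to populate that table and supply a fake HttpContext. A sketch using Moq; MyController, MvcApplication.RegisterRoutes, and the route names are assumptions based on the snippet above:

        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Moq;

        static class ControllerTestSetup
        {
            // Gives the controller real routes and a fake HttpContext so
            // UrlHelper.RouteUrl() can build URLs inside a unit test.
            public static MyController CreateRoutedController()
            {
                RouteTable.Routes.Clear();
                MvcApplication.RegisterRoutes(RouteTable.Routes); // your global.asax registration

                var request = new Mock<HttpRequestBase>();
                request.SetupGet(r => r.ApplicationPath).Returns("/");

                var response = new Mock<HttpResponseBase>();
                response.Setup(r => r.ApplyAppPathModifier(It.IsAny<string>()))
                        .Returns((string url) => url); // UrlHelper calls this internally

                var context = new Mock<HttpContextBase>();
                context.SetupGet(c => c.Request).Returns(request.Object);
                context.SetupGet(c => c.Response).Returns(response.Object);

                var controller = new MyController();
                controller.ControllerContext = new ControllerContext(
                    new RequestContext(context.Object, new RouteData()), controller);
                return controller;
            }
        }

    With that in place, result.Redirect should come back non-null as long as a registered route matches TheController/TheAction.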

    Read the article

  • How to create SQL Server Express DB from SQL Server DB

    - by jeff
    I have a SQL Server 2008 DB. I want to extract SOME tables (and associated schema, constraints, indexes, etc.) and create a SQL Server Express DB. It isn't a sync of the target; we stomp on it. We ONLY need to do this in the file system (not across the wire). We are not fond of the synchronization stuff, and at this point we don't know how to run SSIS. We are a C# shop and a little code is OK, like using the C# bulk import stuff, but that won't create the schema. Suggestions?
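    Since a little code is OK, one hedged option is SQL Server Management Objects (SMO), which can script out just the tables you pick, constraints and indexes included, so the script can be replayed against a fresh Express database. A rough sketch, assuming the SMO assemblies are referenced; the server, database, and table names are placeholders:

        using System;
        using System.Collections.Specialized;
        using Microsoft.SqlServer.Management.Smo;

        class SchemaExtract
        {
            static void Main()
            {
                var server = new Server(@".\SQL2008");
                Database db = server.Databases["SourceDb"];

                var options = new ScriptingOptions
                {
                    Indexes = true,      // include index definitions
                    DriAll = true,       // include PK/FK/check constraints
                    SchemaQualify = true
                };

                string[] wanted = { "Customers", "Orders" }; // tables to extract
                foreach (string name in wanted)
                {
                    Table table = db.Tables[name];
                    StringCollection script = table.Script(options);
                    foreach (string statement in script)
                        Console.WriteLine(statement + Environment.NewLine + "GO");
                }
            }
        }

    The generated script creates the schema on the Express side; the bulk import stuff you already know can then move the data.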

    Read the article

  • Serialize C# dynamic object to JSON object to be consumed by javascript

    - by Jeff Jin
    Based on the example of C# dynamic with XML, I modified DynamicXml.cs and parsed my XML string. The modified part is as follows:

        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            result = null;
            if (binder.Name == "Controls")
                result = new DynamicXml(_elements.Elements());
            else if (binder.Name == "Count")
                result = _elements.Count;
            else
            {
                var attr = _elements[0].Attribute(XName.Get(binder.Name));
                if (attr != null)
                    result = attr.Value;
                else
                {
                    var items = _elements.Descendants(XName.Get(binder.Name));
                    if (items == null || items.Count() == 0)
                        return false;
                    result = new DynamicXml(items);
                }
            }
            return true;
        }

    The XML string to parse:

        "<View runat='server' Name='Doc111'>" +
        "<Caption Name='Document.ConvertToPdf' Value='Allow Conversion to PDF'></Caption>" +
        "<Field For='Document.ConvertToPdf' ReadOnly='False' DisplayAs='checkbox' EditAs='checkbox'></Field>" +
        "<Field For='Document.Abstract' ReadOnly='False' DisplayAs='label' EditAs='textinput'></Field>" +
        "<Field For='Document.FileName' ReadOnly='False' DisplayAs='label' EditAs='textinput'></Field>" +
        "<Field For='Document.KeyWords' ReadOnly='False' DisplayAs='label' EditAs='textinput'></Field>" +
        "<FormButtons SaveCaption='Save' CancelCaption='Cancel'></FormButtons>" +
        "</View>";

        dynamic form = new DynamicXml(markup_fieldsOnly);

    Is there a way to serialize the content of this dynamic object (the name/value pairs inside it) as a JSON object to be sent to the client side (browser)?
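    A hedged approach: a DynamicObject like this can't be fed straight to most serializers, but the same markup can be flattened into plain dictionaries and handed to JavaScriptSerializer (System.Web.Extensions), which serializes dictionaries as JSON objects. A sketch, with the element and attribute names taken from the markup above:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Script.Serialization;
        using System.Xml.Linq;

        class JsonExport
        {
            // Flattens each child element's attributes into name/value pairs
            // and serializes the whole list as JSON for the browser.
            public static string ToJson(string markup)
            {
                XElement view = XElement.Parse(markup);
                var controls = view.Elements()
                    .Select(e => new Dictionary<string, object>
                    {
                        { "tag", e.Name.LocalName },
                        { "attributes", e.Attributes()
                            .ToDictionary(a => a.Name.LocalName, a => (object)a.Value) }
                    })
                    .ToList();
                return new JavaScriptSerializer().Serialize(controls);
            }
        }

    The same dictionary-flattening idea could instead live inside DynamicXml as a ToDictionary() helper, if you prefer to keep the dynamic wrapper as the single entry point.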

    Read the article

  • Installing into the GAC with WiX 3.0

    - by Jeff Yates
    I have a DLL that I would like to install into the Global Assembly Cache so that it can be referenced from multiple locations. I have a File declaration with the Assembly attribute set to ".net", but when the installation tries to install the DLL into the GAC, I get the following error (I have tidied it up a bit to make it more readable):

        MSI (s) (58:38) [19:14:31:031]: Product: MyProductName 1.01 -- Error 1935. An error occurred during the installation of assembly
        'Compass, version="1.0.0.0", culture="neutral", publicKeyToken="392B26B760D48103", processorArchitecture="MSIL"'.
        Please refer to Help and Support for more information. HRESULT: 0x80131043.
        assembly interface: IAssemblyCacheItem, function: Commit, component: {53AEE63B-F356-4D4F-8D61-EB0640A6E160}

    I have hunted around to find out what this means, and the error relates to FUSION_E_UNEXPECTED_MODULE_FOUND. This link also includes this information:

        /// When installing multi-file assemblies into the GAC, the hash of each module is
        /// checked against the hash of that file stored in the manifest. If the
        /// hash of one of the files in the multi-file assembly does not match what is recorded
        /// in the manifest, FUSION_E_UNEXPECTED_MODULE_FOUND will be returned.
        /// The name of the error, and the text description of it, are somewhat confusing.
        /// The reason this error code is described this way is that internally,
        /// Fusion/CLR implements installation of assemblies in the GAC by installing
        /// multiple "streams" that are individually committed.
        /// Each stream has its hash computed, and all the hashes found
        /// are compared against the hashes in the manifest at the end of the installation.
        /// Hence, a file hash mismatch appears as if an "unexpected" module was found.

    Unfortunately, this doesn't make much sense to me, and I don't see how it relates to my assembly, which isn't fancy or complex from my perspective (it's just a regular .NET 3.5 class library, and the current installation test is occurring on my development machine, which is a valid target environment for my project: 32-bit Windows XP SP3). Can anyone shed some light on why I might be getting this error and how I might hope to fix it?

    Read the article

  • Card Shuffling in C#

    - by Jeff
    I am trying to write code for a project that lists the contents of a deck of cards, asks how many times the person wants to shuffle the deck, and then shuffles them. It has to use a method that creates two random integers using the System.Random class. These are my classes:

    Program.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication3
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Deck mydeck = new Deck();
                    foreach (Card c in mydeck.Cards)
                    {
                        Console.WriteLine(c);
                    }
                    Console.WriteLine("How Many Times Do You Want To Shuffle?");
                }
            }
        }

    Deck.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication3
        {
            class Deck
            {
                Card[] cards = new Card[52];
                // 13 values per suit: 2-10, J, Q, K, A (a 52-card deck needs all 13)
                string[] numbers = new string[] { "2", "3", "4", "5", "6", "7",
                    "8", "9", "10", "J", "Q", "K", "A" };

                public Deck()
                {
                    int i = 0;
                    foreach (string s in numbers) { cards[i] = new Card(Suits.Clubs, s); i++; }
                    foreach (string s in numbers) { cards[i] = new Card(Suits.Spades, s); i++; }
                    foreach (string s in numbers) { cards[i] = new Card(Suits.Hearts, s); i++; }
                    foreach (string s in numbers) { cards[i] = new Card(Suits.Diamonds, s); i++; }
                }

                public Card[] Cards
                {
                    get { return cards; }
                }
            }
        }

    classes.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication3
        {
            enum Suits
            {
                Hearts,
                Diamonds,
                Spades,
                Clubs
            }
        }

    Card.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication3
        {
            class Card
            {
                protected Suits suit;
                protected string cardvalue;

                public Card()
                {
                }

                public Card(Suits suit2, string cardvalue2)
                {
                    suit = suit2;
                    cardvalue = cardvalue2;
                }

                public override string ToString()
                {
                    return string.Format("{0} of {1}", cardvalue, suit);
                }
            }
        }

    Please tell me how to make the cards shuffle as many times as the person wants and then list the shuffled cards. Sorry about the formatting; I'm new to this site.
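    A hedged sketch of the missing piece: a shuffle method that, as the assignment requires, draws two random integers from System.Random and swaps those positions, repeated across the deck for each requested pass. The Random instance is created once so back-to-back shuffles don't reuse the same seed; the method name and placement are assumptions:

        using System;

        // Could live inside Deck.cs as an instance method on Deck.
        class DeckShuffleSketch
        {
            static readonly Random rng = new Random();

            public static void Shuffle(Card[] cards, int times)
            {
                for (int n = 0; n < times; n++)
                {
                    for (int i = 0; i < cards.Length; i++)
                    {
                        // Two random integers, per the assignment: pick two
                        // positions anywhere in the deck and swap them.
                        int a = rng.Next(cards.Length);
                        int b = rng.Next(cards.Length);
                        Card temp = cards[a];
                        cards[a] = cards[b];
                        cards[b] = temp;
                    }
                }
            }
        }

    Main could then read the count with int times = int.Parse(Console.ReadLine());, call Shuffle(mydeck.Cards, times);, and loop over mydeck.Cards again to print the shuffled deck.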

    Read the article

  • Cant insert row with auto-increment key via FluentNhibernate

    - by Jeff Shattock
    I'm getting started with Fluent NHibernate, and NHibernate in general. I'm trying to do something that I feel is pretty basic, but I can't quite get it to work: adding a new entry to a simple table. Here's the entity class:

        public class Product
        {
            public Product()
            {
                id = 0;
            }

            public virtual int id { get; set; }
            public virtual string description { get; set; }
        }

    Here's its mapping:

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Id(p => p.id).GeneratedBy.Identity().UnsavedValue(0);
                Map(p => p.description);
            }
        }

    I've tried that with and without the additional calls after Id(). And the insert code:

        var p = new Product() { description = "Apples" };
        using (var s = _sf.CreateSession())
        {
            s.Save(p);
            s.Flush();
        }

    where _sf is a properly configured SessionSource. When I execute this code, I get NHibernate.AssertionFailure: null identifier, which makes sense based on the SQL that NHibernate is executing:

        INSERT INTO "Product" (description) VALUES (@p0);@p0 = 'Apples'

    It doesn't seem to be trying to set the Id field, which seems OK on its face, since the DB should generate that. But it's not being generated, I think. The DB schema is autogenerated by FNH:

        var config = Fluently.Configure().Database(
            MsSqlCeConfiguration.Standard.ShowSql().ConnectionString(@"Data Source=Database1.sdf"));
        var SessionSource = new SessionSource(config.BuildConfiguration().Properties, new ModelMappings());
        var Session = SessionSource.CreateSession();
        SessionSource.BuildSchema(Session);
        CreateInitialData(Session);
        Session.Flush();
        Session.Clear();

    I'm sure I'm doing tons of things wrong, but what's the one that's causing this error?
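    For reference, a minimal save that rules out one common variable, offered as a sketch rather than a guaranteed fix: with identity POIDs, NHibernate issues the INSERT and reads the generated key back at Save() time, and the documented pattern wraps that in an explicit transaction.

        using (var s = _sf.CreateSession())
        using (var tx = s.BeginTransaction())
        {
            var p = new Product { description = "Apples" };
            s.Save(p);   // identity POID: INSERT plus id read-back happen here
            tx.Commit(); // flushes; p.id should now hold the generated value
            Console.WriteLine(p.id);
        }

    If p.id is still 0 after the commit, that points at the schema side (the generated column not actually being an identity column) rather than the session handling.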

    Read the article

  • Microsoft.Win32.TaskScheduler NOT FOUND - I want to import this namespace in VB.NET

    - by Jeff
    I have been looking at sample code online for interfacing with the Windows Task Scheduler, and most of it imports the namespace Microsoft.Win32.TaskScheduler. When I go to import it, it's not there within Win32. Does anyone know why I can't import it? I'm assuming something isn't registered correctly on my machine, but I can't figure out how to fix it. Just for the record, I can start the Scheduled Tasks component under Accessories. I'm using VS 2008 (VB.NET) with Windows XP Professional. Thanks.
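    A likely explanation, offered as an educated guess: Microsoft.Win32.TaskScheduler is not part of the .NET Framework at all; the samples that import it are almost certainly using the open-source Task Scheduler Managed Wrapper library, which has to be downloaded and referenced in the project before the namespace appears. With that assembly referenced, a minimal C# sketch (the task name, trigger, and path are placeholders, and the library's API is quoted from memory):

        using Microsoft.Win32.TaskScheduler;

        class RegisterTask
        {
            static void Main()
            {
                using (var ts = new TaskService())
                {
                    TaskDefinition td = ts.NewTask();
                    td.RegistrationInfo.Description = "Nightly cleanup";
                    td.Triggers.Add(new DailyTrigger());
                    td.Actions.Add(new ExecAction(@"C:\tools\cleanup.exe"));
                    ts.RootFolder.RegisterTaskDefinition("Cleanup", td);
                }
            }
        }

    Nothing is misregistered on the machine; the reference is simply missing from the project.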

    Read the article

  • How to truncate milliseconds off of a .NET DateTime

    - by Jeff Putz
    I'm trying to compare a time stamp from an incoming request to a database stored value. SQL Server of course keeps some precision of milliseconds on the time, and when read into a .NET DateTime, it includes those milliseconds. The incoming request to the system, however, does not offer that precision, so I need to simply drop the milliseconds. I feel like I'm missing something obvious, but I haven't found an elegant way to do it (C#).
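    One common idiom, sketched below: strip the sub-second ticks with modulo arithmetic. The only hedge is that SQL Server's datetime keeps roughly 1/300-second precision, so both sides of the comparison should get the same treatment.

        using System;

        static class DateTimeExtensions
        {
            // Drops everything below whole seconds: 12:34:56.789 -> 12:34:56.000
            public static DateTime TruncateToSecond(this DateTime value)
            {
                return value.AddTicks(-(value.Ticks % TimeSpan.TicksPerSecond));
            }
        }

        // Usage:
        // if (incoming.TruncateToSecond() == stored.TruncateToSecond()) { ... }

    The same modulo trick truncates to any unit by swapping in TimeSpan.TicksPerMinute, TicksPerHour, and so on.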

    Read the article

  • Npgsql pass parameters by name to a stored function

    - by Jeff
    I'm working with code I'm converting to PostgreSQL with .NET. I want to call a stored function that has several parameters, but I'd like to bind the parameters by name, like so:

        NpgsqlCommand command = new NpgsqlCommand("\"StoredFunction\"", _Connection);
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("param2", value2);
        command.Parameters.Add("param1", value1);

    So far, every attempt looks for a function whose parameter types match the order in which I added the parameters to the collection, not the names. Is it possible for Npgsql to bind parameters to stored functions by name?
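    If Npgsql insists on positional binding for CommandType.StoredProcedure, one workaround to consider is sidestepping it with a text command that uses PostgreSQL's own named-argument call syntax (param := value, available in PostgreSQL 9.0 and later; worth verifying against your server version). A sketch reusing _Connection, value1, and value2 from the snippet above:

        // Hypothetical workaround: let PostgreSQL match arguments by name.
        var command = new NpgsqlCommand(
            "SELECT * FROM \"StoredFunction\"(param1 := @p1, param2 := @p2)",
            _Connection);
        command.Parameters.AddWithValue("p1", value1);
        command.Parameters.AddWithValue("p2", value2);

    Here the ADO.NET parameters stay positional as far as Npgsql cares, but the server itself maps param1 and param2 to the right function arguments.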

    Read the article

  • Using OLEDB parameters in .NET when connecting to an AS400/DB2

    - by Jeff Stock
    I have been pulling my hair out trying to figure out why I can't get parameters to work in my query. I have the code written in VB.NET, trying to query an AS/400. I have IBM Access for Windows installed, and I am able to get queries to work, just not with parameters. Any time I include a parameter in my query (e.g. @MyParm), it doesn't work; it's as if the parameter is never replaced with the value it should be. I get the following error:

        SQL0206: Column @MyParm not in specified tables

    Here's my code:

        Dim da As New OleDbDataAdapter
        Dim dt As New DataTable
        da.SelectCommand = New OleDbCommand
        da.SelectCommand.Connection = con
        da.SelectCommand.CommandText = "SELECT * FROM MyTable WHERE Col1 = @MyParm"
        With da.SelectCommand.Parameters
            .Add("@MyParm", OleDbType.Integer, 9)
            .Item("@MyParm").Value = 5
        End With
        ' I get the error here of course
        da.Fill(dt)

    I can replace @MyParm with a literal of 5 and it works fine. What am I missing here? I do this with SQL Server all the time, but this is the first time I am attempting it on an AS/400.
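    Worth checking: the System.Data.OleDb provider does not support named parameters; it binds strictly by position using ? placeholder markers, so @MyParm lands in the SQL text verbatim and the AS/400 parses it as a column name (hence SQL0206). SQL Server's own SqlClient provider understands @names, which is why the same pattern works there. A hedged C# rendering of the fix (the VB.NET version is the same idea, reusing the existing con connection):

        using System.Data;
        using System.Data.OleDb;

        var da = new OleDbDataAdapter();
        var dt = new DataTable();
        da.SelectCommand = new OleDbCommand(
            "SELECT * FROM MyTable WHERE Col1 = ?", con);

        // OleDb binds by position: parameter order must match the ? markers.
        da.SelectCommand.Parameters.Add("p1", OleDbType.Integer).Value = 5;
        da.Fill(dt);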

    Read the article

  • Copying a directory off a network drive using Objective-C/C

    - by Jeff
    Hi all, I'm trying to figure out a way for this to work:

        NSString *searchCommand = [[NSString alloc] initWithFormat:@"cp -R /Volumes/Public/DirectoryToCopy* Desktop/"];
        const char *system_call = [searchCommand UTF8String];
        system(system_call);

    The system() call does not seem to acknowledge the specific string I am passing in. If I try something like:

        system("kextstat");

    there are no problems. Any ideas why the copy command I'm passing in is not working? All I get is "GDB: Running ....." in Xcode. I should mention that if I try the same exact command in Terminal, it works just fine.

    Read the article

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process:

    1. Developer: checks in code
    2. Build: polls the repo, sees the change, and kicks off a build that:
    3. Build: updates from the repo, builds with MSBuild, runs unit tests with NUnit
    4. Build: creates an installer package

    Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally RDP in, download the installers, and run them, which rules out the slick deployment services, so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues:

    - We're on .NET 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .NET 4.0 RC for this? (Would MSBuild be part of that upgrade?)
    - When I generate packages with MSDeploy, I see that I don't have just one file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'?
    - It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are built from a CI environment, aren't you basically out of luck here?

    It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested to see what everyone's enterprise experience is with the tool. Any suggestions?

    Read the article

  • Constant opacity with glBlendFunc on iPhone

    - by Jeff Johnson
    What glBlendFunc should I use to ensure that the opacity of my drawing is always the same? When I use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and multiple images are drawn on top of each other, the result is more and more opaque, until it's completely opaque after a certain number of images. The closest I have come is glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which maintains a constant opacity no matter how many images are on top of each other, although there is a slight variation in opacity where the images overlap. Any other render states I should consider trying? Any other ideas? I am making a drawing app for my kid, and I don't want the images (brush) they draw to cover up the background. In my closest attempt so far (screenshot omitted), I want the overlapping part of the circles to have the same color and opacity as the center part of each circle. I am using cocos2d iphone v0.99.

    Read the article

  • How to test controllers with CodeIgniter PART 2?

    - by Jeff
    I am having difficulty testing controllers in CodeIgniter. I use Toast, but when I invoke my Home controller class I get an exception that "db" is not defined. Does anybody have an idea how to test this 1:1? Thanks

        class Home_tests extends Toast
        {
            function __construct()
            {
                parent::__construct(__FILE__);
                // Load any models, libraries etc. you need here
            }

            function test_select_user()
            {
                $controller = new Home();
                $query = $controller->getDbUser('[email protected]', 'password');
                assert($query->num_rows() == 0);
            }
        }

    Read the article

  • Visual Studio 2010 web.config transformations (TransformWebConfig target)

    - by Jeff D
    I am trying to write unit tests for my transformations, so I am running: msbuild migrated-project.csproj /p:Configuration=Release /T:TransformWebConfig. This works for a new project I create in VS2010, but doesn't in this project. I'm assuming it's because it was originally a 2008 project. I know this is supposed to run in a webplatformbuild whatever, but what I'm trying to do, is just run the transform, so I can grab the transformed web.config, and run some unit tests to make sure the right values exist. I don't see TransformWebConfig referenced as a target in either project, so I'm not sure what I'm looking for.

    Read the article

  • Frame Buster Buster ... buster code needed

    - by Jeff Atwood
    Let's say you don't want other sites to "frame" your site in an <iframe>:

        <iframe src="http://yourwebsite.com"></iframe>

    So you insert anti-framing, frame busting JavaScript into all your pages:

        /* break us out of any containing iframes */
        if (top != self) {
            top.location.replace(self.location.href);
        }

    Excellent! Now you "bust" or break out of any containing iframe automatically. Except for one small problem. As it turns out, your frame-busting code can be busted, as shown here:

        <script type="text/javascript">
            var prevent_bust = 0;
            window.onbeforeunload = function() { prevent_bust++; };
            setInterval(function() {
                if (prevent_bust > 0) {
                    prevent_bust -= 2;
                    window.top.location = 'http://server-which-responds-with-204.com';
                }
            }, 1);
        </script>

    This code does the following:

    - increments a counter every time the browser attempts to navigate away from the current page, via the window.onbeforeunload event handler
    - sets up a timer that fires every millisecond via setInterval(), and if it sees the counter incremented, changes the current location to a server of the attacker's control
    - that server serves up a page with HTTP status code 204, which does not cause the browser to navigate anywhere

    My question is -- and this is more of a JavaScript puzzle than an actual problem -- how can you defeat the frame-busting buster? I had a few thoughts, but nothing worked in my testing:

    - attempting to clear the onbeforeunload event via onbeforeunload = null had no effect
    - adding an alert() stopped the process and let the user know it was happening, but did not interfere with the code in any way; clicking OK lets the busting continue as normal
    - I can't think of any way to clear the setInterval() timer

    I'm not much of a JavaScript programmer, so here's my challenge to you: hey buster, can you bust the frame-busting buster?

    Read the article
