Search Results

Search found 13634 results on 546 pages for 'great turtle'.


  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces and files that are already checked out to people after an upgrade to TFS 2010. Surprisingly, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS installation guide and searched the web, and all I could find were upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I've created a virtual machine to test the upgrade process; the upgrade was successful and all my files and workspaces exist on the new server too. The problem is that the new TFS installation has a new instance ID. When I redirected the clients to the new server, the client seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade. As I mentioned above, I can't find anything on this; it would be great if anyone could point me to a paper or blog post about it. Thanks in advance...
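
    One avenue worth exploring (an assumption on the editor's part, not something from the installation guide): Team Explorer caches workspace-to-server mappings keyed by the old server's instance ID, so after repointing a client you may need to clear that cache so it re-reads the new ID. Roughly, from a Visual Studio command prompt (the collection URL is a placeholder, and the exact flags vary between tf.exe versions, so check tf workspaces /? first):

        tf workspaces /remove:*
        tf workspaces /collection:http://newserver:8080/tfs/DefaultCollection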

    Read the article

  • Why does my Jabber bot only work if I'm debugging my Perl script?

    - by TheGNUGuy
    I am trying to make a jabber bot from scratch and my script is acting funny. I was originally developing the bot on a remote CentOS box, but I have switched to a local Win7 machine. Right now I'm using ActiveState Perl, and I'm using Eclipse with the Perl plugin to run and debug the script. The funny behavior I'm experiencing occurs when I run or debug the script. If I run the script using the debugger it works fine, meaning I can send messages to the bot and it can send messages to me. However, when I just execute the script normally, the bot sends the successful connection message, then disconnects from my jabber server and the script ends. I'm a novice when it comes to Perl and I can't figure out what I'm doing wrong. My guess is it has something to do with the subroutines and sending the presence of the bot. (I know for sure that it has something to do with sending the bot's presence because if the presence code is removed, the script behaves as expected, except the bot doesn't appear to be online.) If anyone can help me with this that would be great. I originally had everything in one file but separated it into several while trying to figure out my problem; here are the pastebin links to my source code. jabberBot.pl: http://pastebin.com/cVifv0mm chatRoutine.pm: http://pastebin.com/JXmMT7av trimSpaces.pm: http://pastebin.com/SkeuWtu1 Thanks again for any help!
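
    Without running the pastebin code this is only a guess, but a script that exits right after sending its presence usually has nothing keeping the process alive afterward (the debugger happens to hold it open). A minimal Net::Jabber-style sketch of the loop in question, with server name and credentials as placeholders:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Net::Jabber;

        my $client = Net::Jabber::Client->new();
        $client->Connect(hostname => 'jabber.example.com', port => 5222)
            or die "Cannot connect: $!";
        my @auth = $client->AuthSend(username => 'bot', password => 'secret',
                                     resource => 'Bot');
        die "Auth failed: $auth[1]" unless $auth[0] eq 'ok';

        $client->SetCallBacks(message => \&handle_message);
        $client->PresenceSend();    # announce the bot as online

        # Keep pumping the connection; without a loop like this the script
        # simply falls off the end and the socket is torn down.
        while (defined $client->Process(1)) { }

        sub handle_message {
            my ($sid, $msg) = @_;
            $client->MessageSend(to   => $msg->GetFrom(),
                                 body => 'pong',
                                 type => $msg->GetType());
        }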

    Read the article

  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering about how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: nametables. Basically, from what I've read, each 8x8 block in an NES nametable points to a location in the pattern table, which holds graphic memory. In addition, the nametable also has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this (assuming 16 8x8 blocks):

    Nametable, with A B C D = pointers to sprite data:

        ABBB
        CDCC
        DDDD
        DDDD

    Attribute table, with 1 2 3 = pointers to color palette data, with < referencing the value to the left, ^ the value above, and ' the value to the left and above:

        1<2<
        ^'^'
        3<3<
        ^'^'

    So, in the example above, the blocks would be colored like so:

        1A 1B 2B 2B
        1C 1D 2C 2C
        3D 3D 3D 3D
        3D 3D 3D 3D

    Now, if I have this on a fixed screen it works great, because the NES resolution is 256x240 pixels. But how do these tables get adjusted for scrolling? Nametable 0 can scroll into Nametable 1, and if you keep scrolling, Nametable 0 will wrap around again. That I get. What I don't get is how the attribute table wraps around as well. From what I've read online, the 16x16 blocks it assigns attributes to cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice versa in SMB3). My concern is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles to the right to be green as well until they move further into frame, at which point they'll revert to their normal colors.
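
    For reference, the address math as the editor understands it (worth double-checking against the nesdev wiki): each attribute byte covers a 4x4-tile (32x32 pixel) block, with two bits per 16x16 quadrant, and the coarse scroll is applied to the tile coordinates before this lookup, which is exactly why a scroll that isn't a multiple of 16 pixels can land inside an attribute block and tint the edge column. A small C sketch:

        #include <stdint.h>

        /* tile_x is 0-31 and tile_y is 0-29, *after* the coarse scroll
         * offset has been applied and the nametable selected. */
        uint16_t attribute_address(uint16_t nametable_base, int tile_x, int tile_y)
        {
            /* The 64-byte attribute table sits at base + 0x3C0; each byte
             * covers a 4x4-tile (32x32 pixel) block. */
            return nametable_base + 0x3C0 + (tile_y / 4) * 8 + (tile_x / 4);
        }

        int palette_bits_shift(int tile_x, int tile_y)
        {
            /* Within that byte, two bits select the palette for each
             * 2x2-tile (16x16 pixel) quadrant: bits 0-1 top-left,
             * 2-3 top-right, 4-5 bottom-left, 6-7 bottom-right. */
            return ((tile_y & 2) ? 4 : 0) + ((tile_x & 2) ? 2 : 0);
        }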

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML, and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32px = 1gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor was selected such that most numbers output were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often led to very long floats for background images. My current approach gets the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
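
    One outlier-tolerant variant of the GCD approach, as a sketch (the function name, candidate search, and 20% tolerance are all invented here, and it is plain Python since the converter's own language isn't specified): instead of the GCD of the whole list, consider every divisor of every value and take the largest one that leaves all but a small fraction of the values whole.

        def pick_unit(values, allowed_outliers=0.2):
            """Choose a pixels-per-unit factor so that most offsets come out whole.

            The plain GCD is dragged down to 1 by a single odd value, so every
            divisor of every value is treated as a candidate, and the largest
            candidate that leaves at most allowed_outliers of the values
            non-integral wins. Fine for a few hundred offsets, not tuned beyond that.
            """
            values = sorted({abs(v) for v in values if v})
            if not values:
                return 1
            max_bad = int(len(values) * allowed_outliers)
            candidates = {d for v in values for d in range(1, v + 1) if v % d == 0}
            for d in sorted(candidates, reverse=True):
                if sum(1 for v in values if v % d) <= max_bad:
                    return d
            return 1

        # e.g. pick_unit([32, 64, 96, 320, 33]) -> 32 (the stray 33 is tolerated
        # and becomes the simple fraction 33/32 instead of forcing a unit of 1)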

    Read the article

  • How can I "pack()" a printable Java Swing component?

    - by Jonas
    I have implemented a Java Swing component that implements Printable. If I add the component to a JFrame and call this.pack(); on the JFrame, it prints perfectly. But if I don't add the component to a JFrame, just a blank page is printed. This code gives a great printout:

        final PrintablePanel p = new PrintablePanel(pageFormat);
        new JFrame() {{ getContentPane().add(p); this.pack(); }};
        job.setPrintable(p, pageFormat);
        try {
            job.print();
        } catch (PrinterException ex) {
            System.out.println("Fail");
        }

    But this code gives a blank page:

        final PrintablePanel p = new PrintablePanel(pageFormat);
        // new JFrame() {{ getContentPane().add(p); this.pack(); }};
        job.setPrintable(p, pageFormat);
        try {
            job.print();
        } catch (PrinterException ex) {
            System.out.println("Fail");
        }

    I think that this.pack(); is the big difference. How can I do pack() on my printable component so it prints fine, without adding it to a JFrame? The panel is using several LayoutManagers. I have tried p.validate(); and p.revalidate(); but it's not working. Any suggestions? Or do I have to add it to a hidden JFrame before I print the component?
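
    A sketch of a possible stand-in for pack() when the panel never goes on screen (PrintablePanel and job are from the question; the helper name is made up): give the root component its preferred size, then run each container's layout manager by hand before handing the component to the PrinterJob.

        import java.awt.Component;
        import java.awt.Container;

        final class OffscreenLayout {
            /** Rough stand-in for pack(): size the root, then lay out the whole tree. */
            static void layOut(Component root) {
                root.setSize(root.getPreferredSize());   // what pack() would have done
                layOutTree(root);
            }

            private static void layOutTree(Component c) {
                c.doLayout();                            // let the LayoutManager place the children
                if (c instanceof Container) {
                    for (Component child : ((Container) c).getComponents()) {
                        layOutTree(child);               // children already got sizes from doLayout above
                    }
                }
            }
        }

        // usage, roughly:
        //     PrintablePanel p = new PrintablePanel(pageFormat);
        //     OffscreenLayout.layOut(p);
        //     job.setPrintable(p, pageFormat);
        //     job.print();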

    Read the article

  • Compiling a click-once app that requires administrator?

    - by Assimilater
    Hi, a lot of my programs require the ability to write files to the hard drive. When I first made these programs for XP they worked great, but now I'm less ignorant about UAC (I got a new laptop recently), and for future customers I've noticed the potential for a lot of annoying error messages. Quite frankly, if the program can't write data to the hard drive or the thumb drive it's on, there's no point in running it. I've tried multiple times to build a requirement for administrator or user access into the manifest (I'm not sure if anything less would solve the problem), but have failed because ClickOnce has security features in place to prevent me from doing so. I'd rather not have to tell my customers how to make the program run as an administrator by editing the file's properties; I'd much rather have a convenient pop-up like the one new programs such as iTunes or FileZilla show when they conflict with UAC, requesting the privileges they need. I'd really like to do this but have had little success. Any and all advice that can remedy this grievous problem is appreciated. Thanks.
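
    One workaround worth sketching (it is not ClickOnce-specific API, and the class and method names here are made up): since a ClickOnce manifest can't demand elevation, the app can detect at startup that it isn't elevated and relaunch itself with the "runas" verb, which produces exactly the UAC prompt described above. The simpler alternative, writing only to per-user locations such as ApplicationData, avoids the prompt entirely.

        using System.Diagnostics;
        using System.Reflection;
        using System.Security.Principal;

        static class Elevation
        {
            public static bool IsElevated()
            {
                var identity = WindowsIdentity.GetCurrent();
                return new WindowsPrincipal(identity)
                    .IsInRole(WindowsBuiltInRole.Administrator);
            }

            // Returns true when an elevated copy has been launched and the
            // current (non-elevated) instance should simply exit.
            public static bool RelaunchElevated()
            {
                if (IsElevated()) return false;

                var psi = new ProcessStartInfo
                {
                    FileName = Assembly.GetEntryAssembly().Location,
                    UseShellExecute = true,   // required for the UAC dialog
                    Verb = "runas"            // ask Windows to elevate
                };
                Process.Start(psi);           // shows the familiar UAC prompt
                return true;
            }
        }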

    Read the article

  • Failure remediation strategy for File I/O

    - by Brett
    I'm doing buffered IO into a file, both read and write. I'm using fopen(), fseeko(), standard ANSI C file I/O functions. In all cases, I'm writing to a standard local file on a disk. How often do these file I/O operations fail, and what should the strategy be for failures? I'm not exactly looking for stats, but I'm looking for a general purpose statement on how far I should go to handle error conditions. For instance, I think everyone recognizes that malloc() could and probably will fail someday on some user's machine and the developer should check for a NULL being returned, but there is no great remediation strategy since it probably means the system is out of memory. At least, this seems to be the approach taken with malloc() on desktop systems, embedded systems are different. Likewise, is it worth reattempting a file I/O operation, or should I just consider a failure to be basically unrecoverable, etc. I would appreciate some code samples demonstrating proper usage, or a library guide reference that indicates how this is to be handled. Any other data is, of course, welcome.
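
    By way of illustration, here is roughly the level of checking that seems proportionate for ordinary buffered stdio on a local disk (a sketch, not a prescription): test every call that can fail, report errno, and treat the failure as unrecoverable for that operation rather than retrying.

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        int save_blob(const char *path, const void *buf, size_t len)
        {
            FILE *fp = fopen(path, "wb");
            if (fp == NULL) {
                fprintf(stderr, "open %s: %s\n", path, strerror(errno));
                return -1;
            }
            if (fwrite(buf, 1, len, fp) != len) {
                fprintf(stderr, "write %s: %s\n", path, strerror(errno));
                fclose(fp);
                return -1;
            }
            if (fclose(fp) != 0) {      /* buffered data is flushed here, so errors can surface too */
                fprintf(stderr, "close %s: %s\n", path, strerror(errno));
                return -1;
            }
            return 0;
        }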

    Read the article

  • Passing a parameter in a Report's Open Event to a parameter query (Access 2007)

    - by JPM
    Hi there, I would like to know if there is a way to set the parameters in an Access 2007 query using VBA. I am new to using VBA in Access, and I have been tasked with adding a little piece of functionality to an existing app. The issue I am having is that the same report can be called from two different places in the application: the first is a command button on a data entry form, the other a switchboard button. The report itself is based on a parameter query that requires the user to enter a Supplier ID. The user would like not to have to enter the Supplier ID on the data entry form (since the form displays the Supplier ID already), but from the switchboard they would like to be prompted to enter a Supplier ID. Where I am stuck is how to call the report's query (in the report's Open event) and pass the SupplierID from the form as the parameter. I have been trying for a while, and I can't get anything to work correctly. Here is my code so far, but I am obviously stumped.

        Private Sub Report_Open(Cancel As Integer)
            Dim intSupplierCode As Integer

            'Check to see if the data entry form is open
            If CurrentProject.AllForms("frmExample").IsLoaded = True Then
                'Retrieve the SupplierID from the data entry form
                intSupplierCode = Forms![frmExample]![SupplierID]
                'Call the parameter query passing the SupplierID????
                DoCmd.OpenQuery "qryParams"
            Else
                'Execute the parameter query as normal
                DoCmd.OpenQuery "qryParams"?????
            End If
        End Sub

    I've tried Me.SupplierID = intSupplierCode, and although it compiles, it bombs when I run it. And here is my SQL code for the parameter query:

        PARAMETERS [Enter Supplier] Long;
        SELECT Suppliers.SupplierID, Suppliers.CompanyName,
               Suppliers.ContactName, Suppliers.ContactTitle
        FROM Suppliers
        WHERE (((Suppliers.SupplierID)=[Enter Supplier]));

    I know there are ways around this problem (and probably an easy way as well), but like I said, my lack of experience using Access and VBA makes things difficult. If any of you could help, that would be great!
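
    One common pattern worth sketching (the report and button names below are hypothetical): base the report on a plain, non-parameterized query, and supply the filter only when the report is opened from the data entry form, via OpenReport's WhereCondition argument. Opened from the switchboard with no WhereCondition, the report shows everything, or an InputBox/parameter prompt can ask for the Supplier ID there instead.

        ' On the command button of frmExample:
        Private Sub cmdPrintSupplier_Click()
            DoCmd.OpenReport "rptSuppliers", acViewPreview, , _
                "SupplierID = " & Me!SupplierID
        End Sub

        ' On the switchboard, the same report opened without a filter:
        Private Sub cmdPrintAllSuppliers_Click()
            DoCmd.OpenReport "rptSuppliers", acViewPreview
        End Sub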

    Read the article

  • Fluent NHibernate Repository with subclasses

    - by reallyJim
    Having some difficulty understanding the best way to implement subclasses with a generic repository using Fluent NHibernate. I have a base class and two subclasses, say:

        public abstract class Person
        {
            public virtual int PersonId { get; set; }
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
        }

        public class Student : Person
        {
            public virtual decimal GPA { get; set; }
        }

        public class Teacher : Person
        {
            public virtual decimal Salary { get; set; }
        }

    My mappings are as follows:

        public class PersonMap : ClassMap<Person>
        {
            public PersonMap()
            {
                Table("Persons");
                Id(x => x.PersonId).GeneratedBy.Identity();
                Map(x => x.FirstName);
                Map(x => x.LastName);
            }
        }

        public class StudentMap : SubclassMap<Student>
        {
            public StudentMap()
            {
                Table("Students");
                KeyColumn("PersonId");
                Map(x => x.GPA);
            }
        }

        public class TeacherMap : SubclassMap<Teacher>
        {
            public TeacherMap()
            {
                Table("Teachers");
                KeyColumn("PersonId");
                Map(x => x.Salary);
            }
        }

    I use a generic repository to save/retrieve/update the entities, and it works great -- provided I'm working with a concrete Repository<Student> or Repository<Teacher>, where I already know whether I'm working with students or teachers. The problem I run into is this: what happens when I have an ID and need to determine the TYPE of person? If a user comes to my site as PersonId = 23, how do I go about figuring out which type of person it is?
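
    A sketch of one way to do the lookup (method name invented; the mapping above is assumed): query against the base class and let NHibernate join the subclass tables and hand back a Student or Teacher as appropriate, then branch on the type. Note that with lazy proxies enabled a plain session.Load can return a Person proxy that defeats "is" checks, which is why this sketch forces an actual fetch of the row.

        using NHibernate;
        using NHibernate.Criterion;

        public Person FindPerson(ISession session, int personId)
        {
            // Fetching by criteria on the base class hydrates the concrete subclass.
            return session.CreateCriteria<Person>()
                          .Add(Restrictions.Eq("PersonId", personId))
                          .UniqueResult<Person>();
        }

        // usage:
        //     var person = FindPerson(session, 23);
        //     if (person is Student)      { var s = (Student)person; /* show GPA */ }
        //     else if (person is Teacher) { var t = (Teacher)person; /* show salary */ }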

    Read the article

  • What is wrong with accessing DBI directly?

    - by canavanin
    Hi everyone! I'm currently reading Effective Perl Programming (2nd edition). I have come across a piece of code which was described as being poorly written, but I don't yet understand what's so bad about it, or how it should be improved. It would be great if someone could explain the matter to me. Here's the code in question:

        sub sum_values_per_key {
            my ( $class, $dsn, $user, $password, $parameters ) = @_;

            my %results;
            my $dbh = DBI->connect( $dsn, $user, $password, $parameters );
            my $sth = $dbh->prepare(
                'select key, calculate(value) from my_table');
            $sth->execute();

            # ... fill %results ...

            $sth->finish();
            $dbh->disconnect();
            return \%results;
        }

    The example comes from the chapter on testing your code (p. 324/325). The sentence that has left me wondering about how to improve the code is the following: "Since the code was poorly written and accesses DBI directly, you'll have to create a fake DBI object to stand in for the real thing." I have probably not understood a lot of what the book has so far been trying to teach me, or I have skipped the section relevant for understanding what's bad practice about the above code... Well, thanks in advance for your help!
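
    A reading of the book's complaint, with a sketch of the usual fix (the fill loop below is illustrative, since the original elides it): the sub constructs its own DBI handle from connection details, so a test cannot substitute a fake one. Passing the handle in makes the dependency explicit and mockable, for example with DBD::Mock.

        sub sum_values_per_key {
            my ( $class, $dbh ) = @_;    # caller supplies a connected handle

            my %results;
            my $sth = $dbh->prepare(
                'select key, calculate(value) from my_table');
            $sth->execute();
            while ( my ( $key, $value ) = $sth->fetchrow_array() ) {
                $results{$key} += $value;
            }
            return \%results;
        }

        # In production:  My::Class->sum_values_per_key($dbh);
        # In a test:      My::Class->sum_values_per_key($mock_dbh);   # e.g. DBD::Mock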

    Read the article

  • Alternatives to LINQ to SQL on heavily loaded pages

    - by Alex
    To begin with, I LOVE LINQ TO SQL. It's so much easier to use than direct querying. But there's one great problem: it doesn't work well on heavily loaded requests. I have some actions in my ASP.NET MVC project that are called hundreds of times every minute. I used to have LINQ to SQL there, but since the amount of requests is gigantic, LINQ TO SQL almost always returned "Row not found or changed" or "X of X updates failed". And it's understandable. For instance, I have to increase some value by one with every request:

        var stat = DB.Stats.First();
        stat.Visits++;
        // ....
        DB.SubmitChanges();

    But while ASP.NET was working on those //... instructions, the stats.Visits value stored in the table got changed. I found a solution: I created a stored procedure

        UPDATE Stats SET Visits = Visits + 1

    It works well. Unfortunately, now I'm getting more and more moments like that, and it sucks to create stored procedures for all cases. So my question is, how do I solve this problem? Are there any alternatives that can work here? I hear that Stack Overflow works with LINQ to SQL, and it's more loaded than my site.
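
    A middle ground worth sketching before writing a stored procedure per counter (DB is the DataContext from the question): DataContext.ExecuteCommand issues the same atomic UPDATE, but keeps the SQL next to the calling code and supports parameters like any other LINQ to SQL call.

        // Increment is done entirely in the database, so concurrent requests
        // can't trample each other's in-memory copy of Visits.
        public void RecordVisit()
        {
            DB.ExecuteCommand("UPDATE Stats SET Visits = Visits + 1");
        }

        // Parameterized variant, e.g. for a per-page counter
        // (PageId is a made-up column):
        //     DB.ExecuteCommand(
        //         "UPDATE Stats SET Visits = Visits + 1 WHERE PageId = {0}", pageId);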

    Read the article

  • Eclipse PDT "tips" ?

    - by Pascal MARTIN
    Hi! (Yes, this is quite an open, general, and subjective question -- it's by design, because I want the tips you think are great!) I'm using Eclipse PDT 2.1 to work in PHP, for both small and big projects -- I've been doing so for quite some time now, actually (since before 1.0 stable, if I remember well)... I was wondering whether any of you know "tips" for being more efficient. Let me explain in more detail: I know about plugins like Aptana (a better editor for JS/CSS), Subversive (for SVN access), RSE, FileSync, and integrating Xdebug's debugger. What I mean by "tips" is more the little things you discovered one day and have used all the time since -- things that allow you to be more efficient in your PHP projects. Some examples of "tips" that come to my mind, and that I already know and use:

      - ctrl+space to open the list of suggestions for function / variable names
      - ctrl+shift+R (Navigate > Open Resource) to open a popup which shows only files whose names contain what you type; i.e., quick opening of files. This one might be the perfect example: I know it is not often known by coworkers, and they find it as useful as I do; so I guess there might be lots of other things like this one that I don't know myself ^^
      - ctrl+M to switch to full-screen view for the editor (instead of double-clicking the tabs bar)
      - shift+F2 while on a function name, to open its page in the PHP manual in a browser

    (Attention Mac users: use Command instead of Control.) I guess you get the point; but I'm really open to any suggestion (be it Eclipse-related in general, or more PHP/PDT-specific) that can help me be more efficient :-) Anyway, thanks in advance for your help!

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
    I need to have some object hanging around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get at the object). After the second event the object is no longer needed for my purposes and should be discarded, either by the storage or by my explicit action. So the ideal context for storing my object is per request (with no sharing issues guaranteed when different threads are serving requests... or processes/servers :) ). Take into account that the actual implementation I can do is made from an HttpModule and is supposed to be a pluggable solution for already-written web apps (so providing some state using static/instance variables in Global.asax doesn't look good - I would have to modify Global.asax in every web application). Cache seems too broad for this use. I tried to see whether httpContext.Application (of type HttpApplicationState) is good for me or not, but I cannot tell whether it is exactly per HttpApplication instance or not (AFAIK you can have several HttpApplication instances used on different threads and therefore serving several requests simultaneously - in which case storage shared between threads will not work correctly; otherwise I would use it, because one HttpApplication instance serves exactly one request at a time). Something could be done with storing state on the HttpModule instances if I knew for sure that they are bound exactly 1-to-1 with every running HttpApplication instance (but again, I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance). Any valuable and reputable links on these topics are much appreciated... It would be great to find something particularly well-suited for a per-request situation (because otherwise I may end up with something ugly... probably either some 'broader'-scoped storage plus hacks to use different keys in the storage for different requests, OR a thread-local thing, thereby committing to the theory that IIS/ASP.NET will never serve the first event from one thread and the second event from another thread, and so on).
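
    For reference, HttpContext.Items is the per-request bag usually reached for in exactly this situation: it lives and dies with the request, so nothing is shared across threads, HttpApplication instances, or servers. A sketch of the wiring from inside a module (the key name and the DateTime payload are made up):

        using System;
        using System.Web;

        public class RequestTimingModule : IHttpModule
        {
            private const string Key = "RequestTimingModule.State";

            public void Init(HttpApplication app)
            {
                app.PreRequestHandlerExecute += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    ctx.Items[Key] = DateTime.UtcNow;       // create and stash the object
                };

                app.PostRequestHandlerExecute += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    object state = ctx.Items[Key];
                    if (state != null)
                    {
                        var elapsed = DateTime.UtcNow - (DateTime)state;
                        // ... use 'elapsed' (log it, etc.) ...
                        ctx.Items.Remove(Key);              // done with it
                    }
                };
            }

            public void Dispose() { }
        }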

    Read the article

  • Combine Search Bar and URL Bar into One (WebView)

    - by Jay Bush
    So I'm in the midst of updating my web browser app for iOS devices, from the ground up, and I'm trying to implement some more convenient features. One feature that seems to be really popular now, and that I have been getting a lot of requests for, is the combination of a Google search bar and a URL bar in one, like that of the Chrome application. In the Google Chrome app, for example, you can either enter a search query like "apple ipad" and it will return a Google search page for 'Apple iPad', or you can enter a URL like "http://apple.com/ipad/" and it will load that URL. I have looked all over the internet, but all I could find were tutorials on how to search Google with the value of the UITextField. I have a feeling that the best way to do this is probably to make a 'check': if the entered value contains 'http://', 'www.', or '.com', or has no spaces, then load it as a URL; if not, build a Google search URL from it and have the web view load that. If anybody could point me in the right direction, that would be great, and some code would be even greater. :) Thanks! If anyone needs part of the code, just ask.
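
    Roughly that check, sketched in Objective-C (method and property names here are placeholders, not from the project, and the heuristics are deliberately simple):

        - (void)loadFromOmnibox:(NSString *)text
        {
            NSString *trimmed = [text stringByTrimmingCharactersInSet:
                                 [NSCharacterSet whitespaceCharacterSet]];
            BOOL hasSpace  = [trimmed rangeOfString:@" "].location != NSNotFound;
            BOOL hasDot    = [trimmed rangeOfString:@"."].location != NSNotFound;
            BOOL hasScheme = [trimmed hasPrefix:@"http://"] ||
                             [trimmed hasPrefix:@"https://"];

            NSURL *url;
            if (!hasSpace && (hasDot || hasScheme)) {
                // Treat it as an address; add a scheme if the user left it off.
                url = [NSURL URLWithString:hasScheme ? trimmed
                        : [@"http://" stringByAppendingString:trimmed]];
            } else {
                // Otherwise hand it to Google as a query.
                NSString *q = [trimmed stringByAddingPercentEscapesUsingEncoding:
                               NSUTF8StringEncoding];
                url = [NSURL URLWithString:
                       [@"http://www.google.com/search?q=" stringByAppendingString:q]];
            }
            [self.webView loadRequest:[NSURLRequest requestWithURL:url]];
        }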

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP child processes. So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a pure web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL, but in RAM as an array or object, for best speed. As far as I know there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multi-threading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
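
    For what it's worth, the single-process, many-clients pattern those single-threaded IRC servers use is available in PHP too, via stream_select(): one loop blocks until any socket is readable, so one process can service many connections without a process per user. A bare-bones broadcast sketch (port and framing are arbitrary, and there is no error handling or MySQL here):

        <?php
        $server  = stream_socket_server('tcp://0.0.0.0:8000', $errno, $errstr);
        $clients = array();

        while (true) {
            $read   = $clients;
            $read[] = $server;
            $write  = $except = null;

            if (stream_select($read, $write, $except, null) < 1) {
                continue;                               // nothing ready yet
            }
            foreach ($read as $sock) {
                if ($sock === $server) {                // new connection
                    $clients[] = stream_socket_accept($server);
                    continue;
                }
                $line = fgets($sock);
                if ($line === false) {                  // client went away
                    unset($clients[array_search($sock, $clients, true)]);
                    fclose($sock);
                    continue;
                }
                foreach ($clients as $other) {          // broadcast the message
                    fwrite($other, $line);
                }
            }
        }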

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers, I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" nor "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • Mercurial fails with a file named ---.config - any way around this?

    - by Travis Laborde
    We are just beginning to learn and evaluate Mercurial, due to an increasing number of nightmare merges, and various other problems we've had with SVN lately. A client wants us to pull down a live copy of their site, do some SEO work on it, and push it back to them. They have no source control at all. I figure this is a great project to work on with Mercurial. Instead of putting it into our SVN and exporting when we are done, we'll use Mercurial... But right away it seems I have some problem :) They have a file called "---.config" (without quotes) which seems to cause our Mercurial to barf. It just can't commit that file. I've created the repo and committed everything else, but I just can't get this one file committed. We are running on Windows 2008 x64 with TortoiseHG 1.0. I suppose I could ignore the file since it is unlikely we'll need to work with it, but still - I'd like to learn how to use Mercurial a bit better. Is there a way around this?
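
    A guess worth trying (this is the standard getopt convention, which Mercurial's command line follows): a leading "-" makes hg parse the file name as an option, so end option parsing with "--" before the file name:

        hg add -- ---.config
        hg commit -m "add odd config file" -- ---.config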

    Read the article

  • VS2010 development web server does not use integrated-mode HTTP handlers/modules

    - by Domenic
    I am developing an ASP.NET MVC 2 web site, targeted at .NET Framework 4.0, using Visual Studio 2010. My web.config contains the following code:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="XhtmlModule" type="DomenicDenicola.Website.XhtmlModule" />
          </modules>
          <handlers>
            <add name="DotLess" type="dotless.Core.LessCssHttpHandler,dotless.Core"
                 path="*.less" verb="*" />
          </handlers>
        </system.webServer>

    When I use Build > Publish to put the web site on my local IIS7 instance, it works great. However, when I use Debug > Start Debugging, neither the HTTP handler nor the module is executed on any requests. Strangely enough, when I put the handler and module <add /> tags back into <system.web /> under <httpHandlers /> and <httpModules />, they work. This seems to imply that the development web server is running in classic mode. How do I fix this?
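
    One workaround worth sketching (it assumes the built-in Visual Studio web server really is hosting the pipeline in classic mode, which matches the symptoms): register the module and handler in both sections, and tell IIS7 integrated mode not to reject the classic-mode entries, so the same web.config works under the development server and under IIS.

        <system.web>
          <httpModules>
            <add name="XhtmlModule" type="DomenicDenicola.Website.XhtmlModule" />
          </httpModules>
          <httpHandlers>
            <add path="*.less" verb="*"
                 type="dotless.Core.LessCssHttpHandler,dotless.Core" />
          </httpHandlers>
        </system.web>
        <system.webServer>
          <!-- keeps IIS7 integrated mode from complaining about the entries above -->
          <validation validateIntegratedModeConfiguration="false" />
          <!-- the existing integrated-mode <modules> and <handlers> stay as-is -->
        </system.webServer>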

    Read the article

  • How do I write tasks? (parallel code)

    - by acidzombie24
    I am impressed with Intel Threading Building Blocks. I like how I should write tasks and not thread code, and I like how it works under the hood, with my limited understanding (tasks are in a pool, there won't be 100 threads on 4 cores, and a task is not guaranteed to run because it isn't on its own thread and may be far back in the pool. But it may be run with another related task, so you can't do bad things the way typical thread-unsafe code can). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here: http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link. WARNING: it isn't 'great'). My favorite part was 'solving the maze is better done in parallel', which is around the 48min mark (you can click the link on the left side). However, I'd like to see more code examples and some of the API for writing tasks. Does anyone have a good resource? I have no idea how a class or pieces of code may look after pushing work onto a pool, or how weird code may look when you need to make a copy of everything and how much of everything is pushed onto a pool.
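
    For a taste of what task-based code looks like in TBB, here is a small sketch (the grayscale loop is an invented stand-in for "work on N items"): you describe the whole range of work, and the scheduler chops it into tasks and farms them out to however many cores exist.

        #include <vector>
        #include "tbb/parallel_for.h"
        #include "tbb/blocked_range.h"

        void to_grayscale(const std::vector<float>& r, const std::vector<float>& g,
                          const std::vector<float>& b, std::vector<float>& out)
        {
            tbb::parallel_for(
                tbb::blocked_range<size_t>(0, out.size()),
                [&](const tbb::blocked_range<size_t>& chunk) {
                    // Each invocation is one task working on one chunk of the range;
                    // there is no thread management anywhere in user code.
                    for (size_t i = chunk.begin(); i != chunk.end(); ++i)
                        out[i] = 0.299f * r[i] + 0.587f * g[i] + 0.114f * b[i];
                });
        }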

    Read the article

  • Perl - how to get the number of elements in a list (not a named array)

    - by NXT
    Hi everyone, I'm trying to get a block of code down to one line. I need a way to get the number of items in a list. My code currently looks like this:

        # Include the lib directory several levels up from this directory
        my @ary = split('/', $Bin);
        my @ary = @ary[0 .. $#ary-4];
        my $res = join '/',@ary;
        lib->import($res.'/lib');

    That's great, but I'd like to make it one line, something like this:

        lib->import( join('/', ((split('/', $Bin)) [0 .. $#ary-4])) );

    But of course the syntax $#ary is meaningless in the above line. Is there an equivalent way to get the number of elements in an anonymous list? Thanks! PS: The reason for consolidating this is that it will be in the header of a bunch of Perl scripts that are ancillary to the main application, and I want this little incantation to be more cut & paste proof.
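
    Two one-line sketches worth considering (the path depth of four levels is taken from the slice above):

        # (a) Sidestep the counting entirely: relative ".." segments are resolved
        #     by the filesystem, so the slice arithmetic disappears.
        use FindBin qw($Bin);
        use lib "$FindBin::Bin/../../../../lib";

        # (b) If the element count of a *list* (not a named array) is really
        #     needed, force scalar context with an empty-list assignment:
        my $count = () = split '/', $Bin;   # number of path components in $Bin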

    Read the article

  • List of fonts installed by default in versions of Windows?

    - by Ricket
    I've been seeing more and more websites using fancy antialiased fonts. Every time I hit one, I think to myself "hmm, what web-safe font is that?" - but after looking at the CSS I typically find some font name in quotes, like "Palatino Linotype". Obviously not web-safe, but according to the Wikipedia article, "Palatino Linotype is shipped with Windows 2000 or later, and Microsoft Office Professional Edition 2003." So that covers what, 95% of users that might visit your website? And thanks to the power of CSS, the website can fallback to a similar generic font typename such as 'serif' for non-Windows users with a line like this: font: 16px/20px "Palatino Linotype", serif; Awesome! I want to start using fancy fonts! Is there a set of lists out there, of the fonts that are preinstalled by default in Windows 98, 2000, NT, ME, XP, 2003, etc., and maybe for the Mac OSX versions and various Linux distributions as well? It would be a great reference for picking web font faces! (if not, someone should compile it!) I had never before heard of Palatino Linotype and I want to know what other fonts have existed since old Windows versions that I've never known about!

    Read the article

  • FOSS ASP.NET Session Replication Solution?

    - by jsight
    I've been searching (with little success) for a free/open-source session clustering and replication solution for ASP.NET. I've run across the usual suspects (Indexus SharedCache, memcached); however, each has some limitations.

      - Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though.
      - Memcached - Little replication/failover support without going to a DB backend.
      - Several SF.net projects - All aborted in the early stages... nothing that appears to have any traction, and one that seems to have gone all commercial.
      - Microsoft Velocity - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear it could fall off into the ether like many other MS dev projects.

    I am fairly used to the Java world, where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world. Are there any suitable alternatives available in the .NET world?

    Read the article

  • How to force positioned elements to stay within the viewable browser area?

    - by jessegavin
    I have a script which inserts "popup" elements into the DOM. It sets their top and left CSS properties relative to the mouse coordinates on a click event. It works great, except that the height of these "popup" elements is variable and some of them extend beyond the viewable area of the browser window. I would like to avoid this. Here's what I have so far:

        <script type="text/javascript">
            $(function () {
                $("area").click(function (e) {
                    e.preventDefault();
                    var offset = $(this).offset();
                    var relativeX = e.pageX - offset.left;
                    var relativeY = e.pageY - offset.top;

                    // 'responseText' is the "popup" HTML fragment
                    $.get($(this).attr("href"), function (responseText) {
                        $(responseText).css({
                            top: relativeY,
                            left: relativeX
                        }).appendTo("#territories");

                        // Need to be able to determine
                        // viewable area width and height
                        // so that I can check if the "popup"
                        // extends beyond.

                        $(".popup .close").click(function () {
                            $(this).closest(".popup").remove();
                        });
                    });
                });
            });
        </script>
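
    A sketch of the measurements the comment above is asking for (the 10px margin is arbitrary, and it assumes the popup is positioned against the document; if it is positioned relative to #territories or another positioned ancestor, that container's offset would need to be subtracted as well):

        var $win       = $(window);
        var viewRight  = $win.scrollLeft() + $win.width();
        var viewBottom = $win.scrollTop()  + $win.height();

        var $popup = $(responseText).appendTo("#territories");
        // Clamp the requested position so the popup's far edges stay on screen.
        $popup.css({
            top:  Math.min(relativeY, viewBottom - $popup.outerHeight() - 10),
            left: Math.min(relativeX, viewRight  - $popup.outerWidth()  - 10)
        });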

    Read the article

  • Setting up SVN/LAMP/test server on Linux: where to start?

    - by John Isaacks
    I have an Ubuntu machine that I have set up. I installed apache2 and php5 on it, and I can access the web server from other machines on the network via http://linux-server. I have Subversion installed on it, and I also have vsftpd installed so I can FTP to it from another computer on the network. Other users and I currently use Dreamweaver to check files in and out directly from our live site to make changes. I want to connect to the Linux server from a PC, make changes on the test server until they are ready, and then push them to the live site. I also want to work Subversion into this workflow, but I'm not sure what the best workflow is or how to set it up. I have no experience with Linux, SVN, or even using a test server; the check-in/check-out we are currently doing is the way I have always done it. I have hit many snags already just getting what I have set up, because of my lack of knowledge in the area. Dreamweaver 5 has integration with Subversion, but I can't figure out how to get it to work. I want to set up and create the best workflow possible. I don't expect anyone to be able to give me an answer that will enlighten me enough to know everything I need to know to do what I want to do (although if possible that would be great); instead I am looking for more of a knowledge-path answer: a general outline of what I need to do, accompanied by links explaining how to do it. Like "read this book to learn Linux, then read this article to learn SVN, etc., and then you should know what to do." I would be happy just getting it all set up, but I would like to know what I am actually doing while setting it up too.
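
    For orientation, one common shape for this kind of setup, sketched as shell commands (paths and the repository name are made up; there are many variations):

        sudo apt-get install subversion
        sudo svnadmin create /var/svn/website

        # put the current live files under version control
        svn import /var/www/live file:///var/svn/website/trunk -m "initial import"

        # make both docroots working copies of trunk
        sudo mv /var/www/live /var/www/live.bak
        sudo svn checkout file:///var/svn/website/trunk /var/www/test
        sudo svn checkout file:///var/svn/website/trunk /var/www/live

        # day to day: developers commit from their PCs (TortoiseSVN, or
        # Dreamweaver CS5's Subversion support); changes are reviewed with
        #     sudo svn update /var/www/test
        # and pushed live, once approved, with
        #     sudo svn update /var/www/live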

    Read the article

  • How to increase PermGen memory for eclipselink StaticWeaveAntTask

    - by rayd09
    We are using EclipseLink and need to weave the code in order for lazy fetching to work properly. During the weave process I'm getting the following error:

        weave:

        BUILD FAILED
        java.lang.OutOfMemoryError: PermGen space

    I have the following tasks within my Ant build file:

        <target name="define_weave_task"
                description="task definition for EclipseLink static weaving">
            <taskdef name="eclipse_weave"
                     classname="org.eclipse.persistence.tools.weaving.jpa.StaticWeaveAntTask"/>
        </target>

        <target name="weave" depends="compile,define_weave_task"
                description="weave eclipselink code into compiled classes">
            <eclipse_weave source="${path.classes}" target="${path.classes}">
                <classpath refid="compile.classpath"/>
            </eclipse_weave>
        </target>

    It has been working great for a long time. Now that the amount of code to be woven has increased, I'm getting the PermGen error. I would like to be able to increase the amount of perm space. If I were doing a compile, I would be able to raise the perm space via a compiler argument such as <compilerarg value="-XX:MaxPermSize=256M"/>, but this does not appear to be a valid argument for EclipseLink weaving. How can I increase the perm space for the weave?
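
    Two directions worth sketching (both are assumptions, not tested against this build): the taskdef'd weaver runs inside Ant's own JVM, so the simplest fix is to raise Ant's PermGen before invoking it, e.g. export ANT_OPTS=-XX:MaxPermSize=256m. Alternatively, the weaver can be run in its own forked JVM via a plain <java> task, where <jvmarg> applies; the command-line class and its arguments below should be checked against the EclipseLink static weaving documentation.

        <target name="weave" depends="compile">
            <java classname="org.eclipse.persistence.tools.weaving.jpa.StaticWeave"
                  fork="true" failonerror="true">
                <jvmarg value="-XX:MaxPermSize=256m"/>
                <classpath refid="compile.classpath"/>
                <arg value="${path.classes}"/>   <!-- source -->
                <arg value="${path.classes}"/>   <!-- target -->
            </java>
        </target>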

    Read the article
