Search Results

Search found 20869 results on 835 pages for 'things i hate'.


  • Custom Alignment and Backgrounds Through Greasemonkey

    - by Jivec
    I'm trying to implement something in Greasemonkey and it is giving me a fair bit of trouble, as I can't get it to work. I frequently use Wolfram Alpha (http://wolframalpha.com) for a lot of things. They have recently updated the home page with a new style, and there are settings you can edit on this page (http://www.wolframalpha.com/homesettings.html). As you would expect, when you clear cookies you lose these settings. What I would like to do is have a Greasemonkey script that sets the background to whatever I like (and that sticks regardless of the state of your cookies). It would also be nice if this background were applied throughout Wolfram Alpha (i.e. when you make queries too, e.g. http://www.wolframalpha.com/input/?i=stack+overflow ). The other thing I'm trying to implement, but am struggling with, is forcing the results pages to be left-aligned so that the browser window can be smaller. If anyone could help me with this it would be appreciated; I have tried to do it myself but I'm unsure how to get it to work.

    Read the article

  • Intellisense for custom config section problem with namespaces

    - by Quick Joe Smith
    I have just rolled a custom configuration section, created an accompanying schema document for IntelliSense, and added it to the Web.config's Schemas property as per Michael Stum's answer to another, similar question. Unfortunately, and possibly because I created the XSD by hand with limited knowledge, the IntelliSense relies on an xmlns attribute pointing to my XSD file's namespace being present on the custom config element. However, when running the project I get an "Unrecognized attribute 'xmlns'. Note that attribute names are case-sensitive" error. I could probably just modify my XSD file to define the xmlns attribute for that element, but I am wondering if this is just a band-aid fix to a larger problem. I must confess I don't have a very good understanding of XML namespaces, so this might be an opportunity to set me straight on a few things. Here are the attributes for my XSD file's root xs:schema element: <xs:schema id="awesomeConfig" targetNamespace="http://awesome.com/schemas" xmlns="http://awesome.com/schemas" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> ... </xs:schema> And on creating the element in the Web.config file, Visual Studio 2008 automatically appends: <awesomeConfig xmlns="http://awesome.com/schemas"></awesomeConfig> So have I misunderstood the meaning of the xs:schema attributes, or is the proper solution as simple as it seems?

    Read the article

  • Netty options for real-time distribution of small messages to a large number of clients?

    - by user439407
    I am designing a (near) real-time Netty server to distribute a large number of very small messages to a large number of clients across the internet. In internal, go-as-fast-as-you-can testing, I found that I could handle 10k clients with no sweat, but now that we are trying to go across the internet, where latency, bandwidth, etc. vary pretty wildly, we are running into the dreaded OutOfMemory issues, even with 2 GB of RAM. I have tried various workarounds (setting the socket stack sizes smaller, setting high and low water marks, cancelling things that are too old), but they only help a little. What would be some good ways to optimize Netty for sending large numbers of small messages without significant delays? Also, the bulk of the traffic consists of one kind of message that I don't particularly care about if it doesn't arrive. I would use UDP, but because we don't control the clients, that's not really a possibility. Is it possible to set a separate timeout solely for this kind of message without affecting the other messages? Any insight you could offer would be greatly appreciated.

    Read the article

  • what is the point of heterogeneous arrays?

    - by aharon
    I know that more-dynamic-than-Java languages, like Python and Ruby, often allow you to place objects of mixed types in arrays, like so: ["hello", 120, ["world"]] What I don't understand is why you would ever use a feature like this. If I want to store heterogeneous data in Java, I'll usually create an object for it. For example, say a User has int ID and String name. While I see that in Python/Ruby/PHP you could do something like this: [["John Smith", 000], ["Smith John", 001], ...] this seems a bit less safe/OO than creating a class User with attributes ID and name and then having your array: [<User: name="John Smith", id=000>, <User: name="Smith John", id=001>, ...] where those <User ...> things represent User objects. Is there a reason to use the former over the latter in languages that support it? Or is there some bigger reason to use heterogeneous arrays? N.B. I am not talking about arrays that include different objects that all implement the same interface or inherit from the same parent, e.g.: class Square extends Shape class Triangle extends Shape [new Square(), new Triangle()] because that is, to the programmer at least, still a homogeneous array, as you'll be doing the same thing with each shape (e.g., calling the draw() method), using only the methods commonly defined between the two.
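    To make the contrast concrete, here is a minimal Python sketch (the User class and the sample data are illustrative, not from the question) showing the raw nested-list form next to a small class:

        from dataclasses import dataclass

        # Heterogeneous nested lists: nothing documents or enforces the shape.
        records = [["John Smith", 0], ["Smith John", 1]]

        @dataclass
        class User:
            name: str
            user_id: int

        # The class-based form names the fields and fails fast on bad data.
        users = [User(name, user_id) for name, user_id in records]
        for user in users:
            print(user.name, user.user_id)

    The first form is quicker to type; the second is what the question argues for, since readers and tools can see what each position means.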

    Read the article

  • using EventToCommand & PassEventArgsToCommand :: how to get sender, or better metaphor?

    - by JoeBrockhaus
    The point of what I'm doing is that there are a lot of things that need to happen in the viewmodel when the view has been loaded, not in the constructor. I could wire up event handlers and send messages, but that just seems kind of sloppy to me. I'm implementing a base view and base viewmodel where this logic is contained, so that all my views hopefully get it by default. Perhaps I can't even get what I'm wanting: the sender. I just figured this is what RoutedEventArgs.OriginalSource was supposed to be? [Edit] In the meantime, I've hooked up an EventHandler in the xaml.cs, and sure enough, OriginalSource is null there as well. So I guess what I really need to know is whether it's possible to get a reference to the view/sender in the Command as well. [/Edit] My implementation requires that a helper class for my viewmodels, which is responsible for creating 'windows', knows about the 'host' control that all the windows get added to. I'm open to suggestions for accomplishing this outside the scope of using EventToCommand. :) (The code for Unloaded is the same.) #region ViewLoadedCommand private RelayCommand<RoutedEventArgs> _viewLoadedCommand = null; /// <summary> /// Command to handle the control's Loaded event. /// </summary> public RelayCommand<RoutedEventArgs> ViewLoadedCommand { get { // lazy-instantiate the RelayCommand on first usage if (_viewLoadedCommand == null) { _viewLoadedCommand = new RelayCommand<RoutedEventArgs>( e => this.OnViewLoadedCommand(e)); } return _viewLoadedCommand; } } #endregion ViewLoadedCommand #region View EventHandlers protected virtual void OnViewLoadedCommand(RoutedEventArgs e) { EventHandler handler = ViewLoaded; if (handler != null) { handler(this, e); } } #endregion

    Read the article

  • Is this a good or bad way to use constructor chaining? (... to allow for testing).

    - by panamack
    My motivation for chaining my class constructors here is so that I have a default constructor for mainstream use by my application and a second one that allows me to inject a mock and a stub. It just seems a bit ugly 'new'-ing things in the ":this(...)" call, and counter-intuitive to call a parameterized constructor from a default constructor, so I wondered what other people would do here. (FYI - SystemWrapper) using SystemWrapper; public class MyDirectoryWorker{ // SystemWrapper interface allows for stub of sealed .Net class. private IDirectoryInfoWrap dirInf; private FileSystemWatcher watcher; public MyDirectoryWorker() : this( new DirectoryInfoWrap(new DirectoryInfo(MyDirPath)), new FileSystemWatcher()) { } public MyDirectoryWorker(IDirectoryInfoWrap dirInf, FileSystemWatcher watcher) { this.dirInf = dirInf; if(!dirInf.Exists){ dirInf.Create(); } this.watcher = watcher; watcher.Path = dirInf.FullName; watcher.NotifyFilter = NotifyFilters.FileName; watcher.Created += new FileSystemEventHandler(watcher_Created); watcher.Deleted += new FileSystemEventHandler(watcher_Deleted); watcher.Renamed += new RenamedEventHandler(watcher_Renamed); watcher.EnableRaisingEvents = true; } public static string MyDirPath{get{return Settings.Default.MyDefaultDirPath;}} // etc... }

    Read the article

  • Constraining to parent container with MouseDragElementBehavior

    - by anonymous
    Hi all, I just had a question regarding constraining a control's drag-and-drop movement to its parent canvas. I tried using the ConstrainToParentBounds property on the MouseDragElementBehavior; however, when this is used the drag must be done really slowly or the movement of the control is choppy or stops altogether. So I am attempting to implement my own boundary constraints. I seem to be running into difficulty, though. I am still using the MouseDragElementBehavior but am attempting to supplement it by also handling the MouseLeftButtonDown, MouseMove and MouseLeftButtonUp events. I know that these are working (they haven't been overridden by the MouseDragElementBehavior), as I have tested them using other methods. I will post my current code below: private void Control_MouseMove(object sender, MouseEventArgs e) { MyControl mc = (MyControl)sender; Canvas canvas = (Canvas)mc.Parent; GeneralTransform ct = canvas.TransformToVisual(Application.Current.RootVisual as UIElement); Point canvas_offset = ct.Transform(new Point(0,0)); double canvasTop = canvas_offset.Y; double canvasLeft = canvas_offset.X; GeneralTransform gt = mc.TransformToVisual(Application.Current.RootVisual as UIElement); Point offset = gt.Transform(new Point(0,0)); double controlTop = offset.Y; double controlLeft = offset.X; if(isMouseCaptured) { if(controlTop < canvasTop) { mc.Opacity = 1; //to test if conditions are being met, seems to indicate ok mc.SetValue(Canvas.TopProperty, canvasTop); } if(controlLeft < canvasLeft) { mc.Opacity = 1; mc.SetValue(Canvas.TopProperty, canvasTop); } } } This is what my code looks like at the moment (I realize there is nothing there for right/bottom). I've tried a bunch of different things at this point and none of them seem to give the desired result; the control's movement is still not constrained to the canvas. Any help/pointers would be greatly appreciated. Thanks!

    Read the article

  • Why does gcc generate verbose assembly code?

    - by Jared Nash
    I have a question about the assembly code generated by GCC (-S option). Since I am new to assembly language and know very little about it, the question will be very primitive. Still, I hope somebody will answer. Suppose I have this C code: main(){ int x = 15; int y = 6; int z = x - y; return 0; } If we look at the assembly code (especially the part corresponding to int z = x - y), we see: main: ... subl $16, %esp movl $15, -4(%ebp) movl $6, -8(%ebp) movl -8(%ebp), %eax movl -4(%ebp), %edx movl %edx, %ecx subl %eax, %ecx movl %ecx, %eax movl %eax, -12(%ebp) ... Why doesn't GCC generate something like this, which involves less copying of values around? main: ... movl $15, -4(%ebp) movl $6, -8(%ebp) movl -8(%ebp), %edx movl -4(%ebp), %eax subl %edx, %eax movl %eax, -12(%ebp) ... P.S. Linux zion-5 2.6.32-21-generic #32-Ubuntu SMP Fri Apr 16 08:10:02 UTC 2010 i686 GNU/Linux gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)

    Read the article

  • Aspect-Oriented Programming in an OOP world - breaking rules?

    - by Maksim Kondratyuk
    Hi all! When I worked on an ASP.NET MVC web site project, I investigated different approaches to validation. Some of them were DataAnnotation validation and the Validation Block. They use attributes for setting up validation rules, like this: [Required] public string Name {get;set;} I was confused about how this approach squares with the SRP (single responsibility principle) from the OOP world. Also, I don't like having any business logic in business objects; I prefer a "poor business objects" model, but when I decorate my business objects with validation attributes for real requirements, they become ugly (lots of attributes, localization logic, and so on). The idea behind attributes is really simple, but in my opinion the validation decoration should be separated from the object. I'm not sure whether the approach of separating the validation rules into XML files or into other objects is the solution. Another bad side of AOP is problems with unit testing such code: when I decorate some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I can't write proper unit tests for those actions. Do you think that attributes don't break SRP, or do you just disregard this and think it's the simplest, and not the worst, way? P.S. I read some similar articles and discussions and I just want to put things in proper order. P.P.S. Sorry for my "fluent" English :=)

    Read the article

  • How do I get Bison/YACC to not recognize a command until it parses the whole string?

    - by chucknelson
    I have some bison grammar: input: /* empty */ | input command ; command: builtin | external ; builtin: CD { printf("Changing to home directory...\n"); } | CD WORD { printf("Changing to directory %s\n", $2); } ; I'm wondering how to get Bison to not accept (YYACCEPT?) something as a command until it reads ALL of the input. So I can have all these rules below that use recursion or whatever to build things up, which either results in a valid command or something that's not going to work. One simple test I'm doing with the code above is just entering "cd mydir mydir". Bison parses CD and WORD and goes "hey! this is a command, put it to the top!". Then the next token it finds is just WORD, which has no rule, and then it reports an error. I want it to read the whole line and realize CD WORD WORD is not a rule, and then report an error. I think I'm missing something obvious and would greatly appreciate any help - thanks! Also - I've tried using input command NEWLINE or something similar, but it still pushes CD WORD to the top as a command and then parses the extra WORD separately.

    Read the article

  • Complex SQL query, one to many relationship

    - by Ethan
    Hey SO, I have a query where I need to get: a specific dog; all comments relating to that dog; the user who posted each comment; all links to images of the dog; and the user who posted each link. I've tried several things, and can't quite figure out how to make it work. Here's what I have (condensed so you don't have to wade through it all): SELECT d.dog_id, d.name, c.comment, c.date_added AS comment_date_added, u.username AS comment_username, u.user_id AS comment_user_id, l.link AS link, l.date_added AS link_date_added, u2.username AS link_username, u2.user_id AS link_user_id FROM dogs AS d LEFT JOIN comments AS c ON c.dog_id = d.dog_id LEFT JOIN users AS u ON c.user_id = u.user_id LEFT JOIN links AS l ON l.dog_id = d.dog_id LEFT JOIN users AS u2 ON l.user_id = u2.user_id WHERE d.dog_id = '1' It's sort of close to working, but it only returns the first comment and the first link, all as one big array with all the info I requested. There are multiple comments and links per dog, so I need it to give me all the comments and all the links. Ideally it'd return an object with dog_id, name, comments (an array of the comments) and links (an array of the links); each comment would have comment, date_added, username and user_id, and each link would have link, date_added, username and user_id. It has to work even if there are no links or comments. I learned the basics of MySQL somewhat recently, but this is pretty far over my head. Any help would be wonderful. Thanks!

    Read the article

  • can I disable the "(Type e to repeat macro)" message in emacs?

    - by lindes
    Hi there, So, I've finally made the plunge, and have gotten to the state where I'm quite happy to have switched from vi and vim to emacs... I've been putting stuff in my .emacs file, learning how to evaluate things (not to mention becoming familiar with movement commands), etc. etc. etc. And now I have a problem with a require line in my .emacs file (a require statement*), which bombs out when I launch emacs (and generally fails to work). So, this led me to the following situation: In the process of trying to debug the above situation, one of the steps I took was to open the file I was trying to require, and evaluate it bit by bit, using C-M-f and C-x C-e (and later just M-x eval-buffer), which all worked fine. But along the way of that section-by-section evaluation, I got tired of typing all those, and so I recorded a keyboard macro... C-x ( C-M-f C-x C-e C-x ) and then C-x e... which gave me a message in the minibuffer (I think I'm using the right name), saying (Type e to repeat macro). Which meant I could no longer see the resultant value of the evaluation of each section of code... which, while not critical in this case, I liked having. Which leads me to the actual question: Is there a way to disable that message, and/or to cause the minibuffer to show multiple lines at once? I know about the *Messages* buffer, and that could have helped; I'm just wondering if there's a way to either disable that message, or otherwise make it coexist with other messages. Any suggestions? Thanks! lindes * - the problem at hand, which is not really my question, is that (require 'ruby-mode/ruby-mode) fails, even though emacs is definitely and successfully (per system call tracing) opening and reading the ruby-mode.el file. I presume this is because the provide line says just 'ruby-mode. I've found a solution for this, but if anyone can point me to any "best practices", I'd appreciate it.

    Read the article

  • KERN-EXEC 3 when navigating within a text box (Symbian OS Browser Control)

    - by Andrew Flanagan
    I've had nothing but grief using Symbian's browser control on S60 3rd edition FP1. We currently display pages and many things are working smoothly. However, when inputting text into an HTML text field, the user will get a KERN-EXEC 3 if they move left at the beginning of the text input area (which should "wrap" it to the end) or if they move right at the end of the text input area (which should "wrap" it to the beginning). I can't seem to trap the input in OfferKeyEventL. I get the key event, I return EKeyWasConsumed and the cursor still moves. TKeyResponse CMyAppContainer::OfferKeyEventL(const TKeyEvent& aKeyEvent, TEventCode aType) { if (iBrCtlInterface) // My browser control { TBrCtlDefs::TBrCtlElementType type = iBrCtlInterface->FocusedElementType(); if (type == TBrCtlDefs::EElementActivatedInputBox || type == TBrCtlDefs::EElementInputBox) { if (aKeyEvent.iScanCode == EStdKeyLeftArrow || aKeyEvent.iScanCode == EStdKeyRightArrow) { return EKeyWasConsumed; } } } } I would be okay with completely disabling arrow key navigation but can't seem to do this. Any ideas? Am I going about this the wrong way? Has anyone here even worked with the Browser Control library (browserengine.lib) on S60 3.1?

    Read the article

  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign? Components: SQL Server 2008; SQL Server Reporting Services (SSRS) 2008; SQL Server Integration Services (SSIS) 2008. Platforms: Production; Staging/QA; Development/Integration. We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role set up. I plan to set up access for Developers to Prod and Staging/QA DBs as Read-Only. However, I still want developers to retain the ability to run Profiling. We need Deployment accounts with higher privilege levels. We are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the Development Server, Developers need broad privileges. However, I am not sure that just making them all admins is really the best choice. It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that will be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?

    Read the article

  • Predefined column names in SQL Server pivot table

    - by Marcos Buarque
    Hi, the other day I opened a topic here in StackOverflow (stackoverflow.com/questions/4663698/how-can-i-display-a-consolidated-version-of-my-sql-server-table). At that time I needed help on how to show data on a pivot table. From the help I got here in the forum, my research led me to this page about dynamic SQL: www.sommarskog.se/dynamic_sql.html. And then it led me to this awesome SQL script by Itzik Ben-Gan that will create a stored procedure that outputs a pivot table exactly the way I want: sommarskog.se/pivot_sp.sp. Well, almost. I need one change in this stored procedure. Instead of having dynamic column names pulled from the @on_cols variable in the SPROC, I need the output table to hold generic column names in simple ASC order. Could be, for example, col1, col2, col3, col4 ... The dynamic column names are a problem for me. So I need them named by their index in the order they appear. I have tried all sorts of things changing this great SQL script, but it won't work. I did not paste the code from the author because it is too long, but the link above will get us there. Any help appreciated. Thank you very much

    Read the article

  • IHttpModule not being applied to virtual directory

    - by arthurdent510
    I have a network folder that is mapped to my IIS app as a virtual directory, and I'm trying to do some authentication for files located there with an IHttpModule. I've verified that the IHttpModule is firing properly for everything else in my app, just not for the files located in the virtual directory. Most of what I've found says that as long as the directory isn't listed as an application (which it isn't), everything should work. The other solution I found was to add the module tag to the configuration, but that didn't seem to help either. Everything I've found talks about stopping this from happening. So my question is: what could be set that is causing this not to work? Is there a certain execute permission that needs to be set? Any other IIS settings that could cause this? It is an MVC app, and this is how my directory structure is laid out: server/app <- my application folder; server/app/content/downloads <- downloads is the virtual directory. Do I have to add the virtual directory directly under my app directory? Is that part of the problem? I don't have direct control of the server my code is running on, so testing things out is a bit of a pain, so I was looking for some more thoughts before starting to send emails off to my operations people. Thanks!

    Read the article

  • Using SiteMesh with RequestDispatcher's forward()

    - by Rob Hruska
    I'm attempting to integrate SiteMesh into a legacy application using Tomcat 5 as my container. I have a main.jsp that I'm decorating with a simple decorator. In decorators.xml, I've just got one decorator defined: <decorators defaultdir="/decorators"> <decorator name="layout-main" page="layout-main.jsp"> <pattern>/jsp/main.jsp</pattern> </decorator> </decorators> This decorator works if I manually go to http://example.com/my-webapp/jsp/main.jsp. However, there are a few places where a servlet, instead of doing a redirect to a jsp, does a forward: getServletContext().getRequestDispatcher("/jsp/main.jsp").forward(request, response); This means that the URL remains at something like http://example.com/my-webapp/servlet/MyServlet instead of the jsp file and is therefore not being decorated, I presume since it doesn't match the pattern in decorators.xml. I can't do a <pattern>/*</pattern> because there are other jsps that do not need to be decorated by layout-main.jsp. I can't do a <pattern>/servlet/MyServlet*</pattern> because MyServlet may forward to main.jsp sometimes and perhaps error.jsp at other times. Is there a way to work around this without extensive changes to how the servlets work? Since it's a legacy app I don't have as much freedom to change things, so I'm hoping for something configuration-wise that will fix this. SiteMesh's documentation really isn't that great. I've been working mostly off the example application that comes with the distribution. I really like SiteMesh, and am hoping I can get it to work in this case.

    Read the article

  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:
    Screen: sw - screen width; sh - screen height.
    Image: iw - image width; ih - image height.
    Pixel: px - x position of the pixel in the image; py - y position of the pixel in the image.
    Zoom: zf - zoom factor (0.0 to 1.0).
    Background colour: bc - background colour to use when the screen and image aspect ratios are different.
    Outputs: the zoomed image (no anti-aliasing) and the screen position/dimensions of the pixel we are zooming to.
    When zf is 0 the image must fit the screen with the correct aspect ratio. When zf is 1 the selected pixel fits the screen with the correct aspect ratio. One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudo code? To keep it simpler, you can assume the image and screen have the same aspect ratio; I can live with that. I'll update with more info as it's required.
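    One way to frame this, sketched below in Python (the function name and the choice of a plain linear interpolation are my own assumptions, not from the question): compute the image-space view rectangle at zf = 0 (the whole image, padded to the screen's aspect ratio) and at zf = 1 (the single pixel, padded the same way), interpolate the size and centre between them, then scale that rectangle to the screen, filling anything outside the image with bc.

        def zoom_view(sw, sh, iw, ih, px, py, zf):
            """Return (vx, vy, vw, vh, scale): the image-space rectangle to show
            and the image-pixel-to-screen-pixel scale factor."""
            aspect = sw / sh

            def pad(w, h):
                # Grow a w x h region so it matches the screen's aspect ratio.
                return (h * aspect, h) if w / h < aspect else (w, w / aspect)

            w0, h0 = pad(iw, ih)      # zf = 0: whole image visible
            w1, h1 = pad(1.0, 1.0)    # zf = 1: the selected pixel fills the screen
            w = w0 + (w1 - w0) * zf   # interpolate the view size...
            h = h0 + (h1 - h0) * zf
            cx = iw / 2 + (px + 0.5 - iw / 2) * zf   # ...and the view centre
            cy = ih / 2 + (py + 0.5 - ih / 2) * zf
            scale = sw / w            # equals sh / h, since aspect ratios match
            return (cx - w / 2, cy - h / 2, w, h, scale)

    The pixel's on-screen position then falls out directly: screen_x = (px - vx) * scale, screen_y = (py - vy) * scale, with width and height both equal to scale. A logarithmic interpolation of w and h would give a steadier-feeling zoom, but the linear version matches the 0.0-1.0 zf spec most literally.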

    Read the article

  • Multiple calls to preg_replace alters result

    - by Hurpe
    I have a bunch of files that were named in a somewhat standard format. The standard form is basically this: [integer]_word1_word2_word3_ ... _wordn where a word could really be anything, but all words are separated by an underscore. There are really only three things I want to do to the text: 1.) I want to modify the integer, which is always at the beginning, so that something like "200" would become $ 200.00. 2.) Replace any "words" of the form "with", "With", "w/", or "W/" with "with". 3.) Replace all underscores with a space. I wrote three different preg_replace calls to do the trick. They are as follows: 1.) $filename = preg_replace("/(^[0-9]+)/","$ $1.00",$filename); 2.) $filename = preg_replace("/_([wW]|[wW]ith)_/"," with ",$filename); 3.) $filename = preg_replace("/_/"," ",$filename); Each replacement works as expected when run individually, but when all three are run, the 2nd replacement is ignored. Why would something like that occur? Thanks for the help!
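    For illustration only, here is the same three-step pipeline rendered in Python with re.sub (a sketch of the intended transformations, not the poster's PHP, and it does not by itself explain why the second call is skipped):

        import re

        def normalize(name):
            name = re.sub(r'^(\d+)', r'$ \1.00', name)            # 200 -> $ 200.00
            name = re.sub(r'_([wW]/?|[wW]ith)_', ' with ', name)  # _With_, _w/_ -> with
            name = re.sub(r'_', ' ', name)                        # remaining _ -> space
            return name

        print(normalize('200_desk_with_three_drawers'))
        # $ 200.00 desk with three drawers

    One thing worth checking against this model is the order the calls actually run in: if the underscore-stripping step ever runs before the "with" step, the second pattern has no underscores left to match.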

    Read the article

  • Algorithm for optimally choosing actions to perform a task

    - by Jules
    There are two data types: tasks and actions. An action costs a certain time to complete and has a set of tasks it depends on. A task has a set of candidate actions, and our job is to choose one of them. So: class Task { Set<Action> choices; } class Action { float time; Set<Task> dependencies; } For example, the primary task could be "Get a house". The possible actions for this task: "Buy a house" or "Build a house". The action "Build a house" costs 10 hours and has the dependencies "Get bricks" and "Get cement", et cetera. The total time is the sum of the times of all the actions that have to be performed. We want to choose actions such that the total time is minimal. Note that the dependencies can be diamond shaped. For example, "Get bricks" could require "Get a car" (to transport the bricks) and "Get cement" would also require a car. Even if you do both "Get bricks" and "Get cement", you only have to count the time it takes to get a car once. Note also that the dependencies can be circular. For example "Money" - "Job" - "Car" - "Money". This is no problem for us; we simply select all of "Money", "Job" and "Car". The total time is simply the sum of the times of these 3 things. Mathematical description: let actions be the chosen set of actions. Then valid(task) = ∃ action ∈ task.choices. (action ∈ actions ∧ ∀ t ∈ action.dependencies. valid(t)), time = sum { action.time | action ∈ actions }, and the goal is to minimize time subject to valid(primaryTask).
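    A brute-force Python sketch of that formulation (mine, not the poster's; it enumerates every subset of actions, so it is only workable for tiny instances, but it makes the validity fixed point and the shared-cost counting concrete):

        from itertools import combinations

        def min_total_time(tasks, actions, primary):
            # tasks[t]   = set of candidate action names for task t
            # actions[a] = (time, set of dependency task names)
            best_time, best_set = float('inf'), None
            names = list(actions)
            for r in range(len(names) + 1):
                for chosen in map(set, combinations(names, r)):
                    # Greatest fixed point: start optimistic, drop tasks no chosen
                    # action can justify; circular groups therefore stay valid.
                    valid, changed = set(tasks), True
                    while changed:
                        changed = False
                        for t in list(valid):
                            if not any(a in chosen and actions[a][1] <= valid
                                       for a in tasks[t]):
                                valid.discard(t)
                                changed = True
                    if primary in valid:
                        total = sum(actions[a][0] for a in chosen)  # shared actions counted once
                        if total < best_time:
                            best_time, best_set = total, chosen
            return best_time, best_set

        # Hypothetical data mirroring the house example: the car is shared by
        # bricks and cement but its 5 hours are only counted once.
        tasks = {'house': {'buy', 'build'}, 'bricks': {'get_bricks'},
                 'cement': {'get_cement'}, 'car': {'get_car'}}
        actions = {'buy': (100.0, set()), 'build': (10.0, {'bricks', 'cement'}),
                   'get_bricks': (2.0, {'car'}), 'get_cement': (2.0, {'car'}),
                   'get_car': (5.0, set())}
        print(min_total_time(tasks, actions, 'house'))  # 19.0 with the build route

    For realistic sizes you would need something smarter, since the shared-cost structure makes this set-cover-like rather than a simple shortest path, but the sketch pins down exactly what "valid" and "total time" mean.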

    Read the article

  • Is there a modern free (D)VCS that can ignore mainframe sequence numbers?

    - by Brent.Longborough
    I'm looking at migrating a large suite of IBM Assembler Language programs from a VCS based on "filenames include version numbers" to a modern VCS which will give me, among other things, the ability to branch and merge. These files have 80-column records, the last 8 columns being an almost-meaningless sequence number. For a number of reasons which I don't really want to waste space by going into, I need the VCS to ignore (but hopefully preserve in some well-defined manner) the sequence number columns, and to diff and patch based only on the contents of the first 72 columns. Any ideas? Just to clarify "ignore but preserve": I accept it's a bit vague, as I haven't fully collected my ideas yet. It would be something along the lines of this: "When merging/patching, if one side has sequence numbers, output them; if more than one side has sequence numbers, use those present in file (1|2|3)". Why do I want to preserve sequence numbers? First, they really are sequence numbers. Second, I want to reintegrate this stuff back onto the mainframe, where sequence numbers can be terribly significant. (Those of you who know what "SMP/E" means will understand. Those who don't, be happy, but tremble...) I've just realised I hadn't accepted an answer. Difficult choice, but @Noldorin comes closest to where I have to go.
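    Not an answer from the thread, but one common shape of a workaround in case no VCS supports this natively: strip columns 73-80 on the way into the repository and re-attach or regenerate them on the way out (for example as a git clean/smudge filter pair). A minimal Python sketch of the "clean" half, assuming fixed 80-column records:

        import sys

        def clean(lines):
            # Keep only columns 1-72, padded back to 72, so diffs, merges and
            # history ignore whatever sits in the sequence-number columns.
            for line in lines:
                yield line.rstrip('\n')[:72].ljust(72) + '\n'

        if __name__ == '__main__':
            sys.stdout.writelines(clean(sys.stdin))

    The matching "smudge" step would splice sequence numbers back in (renumbering by tens, or replaying them from a sidecar file); whether that round trip is faithful enough for SMP/E is exactly the "ignore but preserve" question the post raises.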

    Read the article

  • Using OUTPUT/INTO within instead of insert trigger invalidates 'inserted' table

    - by Dan
    I have a problem using a table with an instead of insert trigger. The table I created contains an identity column. I need to use an instead of insert trigger on this table. I also need to see the value of the newly inserted identity from within my trigger which requires the use of OUTPUT/INTO within the trigger. The problem is then that clients that perform INSERTs cannot see the inserted values. For example, I create a simple table: CREATE TABLE [MyTable]( [MyID] [int] IDENTITY(1,1) NOT NULL, [MyBit] [bit] NOT NULL, CONSTRAINT [PK_MyTable_MyID] PRIMARY KEY NONCLUSTERED ( [MyID] ASC )) Next I create a simple instead of trigger: create trigger [trMyTableInsert] on [MyTable] instead of insert as BEGIN DECLARE @InsertedRows table( MyID int, MyBit bit); INSERT INTO [MyTable] ([MyBit]) OUTPUT inserted.MyID, inserted.MyBit INTO @InsertedRows SELECT inserted.MyBit FROM inserted; -- LOGIC NOT SHOWN HERE THAT USES @InsertedRows END; Lastly, I attempt to perform an insert and retrieve the inserted values: DECLARE @tbl TABLE (myID INT) insert into MyTable (MyBit) OUTPUT inserted.MyID INTO @tbl VALUES (1) SELECT * from @tbl The issue is all I ever get back is zero. I can see the row was correctly inserted into the table. I also know that if I remove the OUTPUT/INTO from within the trigger this problem goes away. Any thoughts as to what I'm doing wrong? Or is how I want to do things not feasible? Thanks.

    Read the article

  • Managing large binary files with git

    - by pi
    Hi there. I am looking for opinions on how to handle large binary files on which my source code (web application) is dependent. We are currently discussing several alternatives: 1) Copy the binary files by hand. Pro: not sure. Contra: I am strongly against this, as it increases the likelihood of errors when setting up a new site/migrating the old one, and builds up another hurdle to take. 2) Manage them all with git. Pro: removes the possibility of 'forgetting' to copy an important file. Contra: bloats the repository and decreases the flexibility to manage the code base, and checkouts/clones/etc. will take quite a while. 3) Separate repositories. Pro: checking out/cloning the source code is as fast as ever, and the images are properly archived in their own repository. Contra: removes the simplicity of having the one and only git repository on the project; surely introduces some other things I haven't thought about. What are your experiences/thoughts regarding this? Also: does anybody have experience with multiple git repositories and managing them in one project? Update: The files are images for a program which generates PDFs with those files in it. The files will not change very often (as in years) but are very relevant to the program. The program will not work without the files. Update 2: I found a really nice screencast on using git-submodule at GitCasts.

    Read the article

  • Screen capture doesn't work on MFC application in Vista

    - by David Thornley
    We've got some in-house applications built in MFC, with OpenGL drawing routines. They all use the same code to draw on the screen and either print the screen or save it to a JPEG file. Everything's been working fine in Windows XP, and I need to find a way to make them work on Vista. In three of our applications, everything works. In the remaining one, I can get the window border, title bar, menus, and task bar, but the interior never shows up. As I said, these applications use the exact same code to write to the screen and capture the window image, and the only difference I see that looks like it might be relevant is that the problem application uses the MFC multiple document interface, while the ones that work use the single document interface. Either the answer isn't on the net, or I'm worse at Googling than I thought. I asked on the MSDN forums, and the only practical suggestion I got was to use GDI+ rather than GDI, and that did nothing different. I have tried different things with every part of the code that captures and prints or saves, given a pointer to the window, so apparently it's a matter of the window itself. I haven't rebuilt the offending application using SDI yet, and I really don't have any other ideas. Has anybody seen anything like this?

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that is holding real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.) and each property type shares a majority of the attributes, but there are a few that are unique to that property type. My question is that the shared attributes are in excess of 250 fields, and this seems like too many fields to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain, as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 w/ Entity Framework 5, and C#, and data is passed to the ASP.NET web application via a WCF service. Thanks in advance.

    Read the article
