Search Results

Search found 1268 results on 51 pages for 'eric girard'.

Page 39/51 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • SASS color array

    - by Eric Holmes
    I am trying to write a loop that will cycle through colors and condense the amount of code in my scss file. Here is a simple example of what I have:

        $color1: blue;
        $color2: red;
        $color3: white;
        $color4: black;

        .color1-bg { background-color: $color1; }
        .color2-bg { background-color: $color2; }
        .color1-border { border-color: $color1; }
        .color2-border { border-color: $color2; }

    And so on. I am looking for a way to make the equivalent of a foreach loop, and cycle through an 'array' of colours by index. Something like this:

        @each $color in $color1, $color2, $color3, $color4 {
          .{$color}-bg { background-color: $color; }
          .{$color}-border { border-color: $color; }
        }

    I know the syntax is wrong, but that is my thinking process. Thanks for the help!
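
    One possible approach (a sketch, assuming Sass 3.3+ so that maps and #{} interpolation are available) is to put the colours in a map and generate both rules from a single @each:

        $colors: (color1: blue, color2: red, color3: white, color4: black);

        @each $name, $value in $colors {
          // #{} interpolation builds the selector from the map key
          .#{$name}-bg { background-color: $value; }
          .#{$name}-border { border-color: $value; }
        }

    On older Sass versions the same idea works with a plain list and nth($list, $i) inside a @for loop.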

    Read the article

  • How to protect a .NET application from memory issues when importing a Win32 DLL

    - by Eric
    I have a C# application that needs to use a legacy Win32 DLL. The DLL is almost its own app: it has dialogs, operations with hardware, etc. When this DLL is imported and used, a couple of problems occur: Dragging a dialog (not a Windows system dialog, but one created by the DLL) across the managed-code app causes the UI to not repaint. Further, it generates an out-of-memory exception from various UI controls. The performance is incredibly slow. There seems to be no way to unload the DLL, so the memory never gets cleaned up. When we close our managed app, we get another memory exception. At the moment we import each method call as such:

        [DllImport("dllname.dll", EntryPoint = "MethodName", SetLastError = true, CharSet = CharSet.Auto, ExactSpelling = true, CallingConvention = CallingConvention.StdCall)]
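
    One direction worth trying (a sketch, not a guaranteed fix) is to load the library dynamically through the Win32 loader functions so it can be explicitly freed when you are done with it; the delegate signature below is a placeholder for the real export:

        using System;
        using System.Runtime.InteropServices;

        static class LegacyDll
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern IntPtr LoadLibrary(string path);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern IntPtr GetProcAddress(IntPtr module, string name);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool FreeLibrary(IntPtr module);

            [UnmanagedFunctionPointer(CallingConvention.StdCall)]
            delegate int MethodNameDelegate(int arg);   // placeholder signature

            public static int CallMethodName(int arg)
            {
                IntPtr module = LoadLibrary("dllname.dll");
                try
                {
                    IntPtr proc = GetProcAddress(module, "MethodName");
                    var fn = (MethodNameDelegate)Marshal.GetDelegateForFunctionPointer(
                        proc, typeof(MethodNameDelegate));
                    return fn(arg);
                }
                finally
                {
                    FreeLibrary(module);   // lets the DLL's memory be reclaimed between calls
                }
            }
        }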

    Read the article

  • How to define schema for an ActiveRecord model?

    - by Eric Stanton
    I can find how to define columns only when doing migrations. However, I do not need to migrate my model. I want to work with it "virtually". Does AR read column data only from the db? Is there any way to define columns like in DataMapper?

        class Post
          include DataMapper::Resource
          property :id,        Serial
          property :title,     String
          property :published, Boolean
        end

    Now I can play with my model without migrations/connections.
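
    A sketch of the classic "tableless model" workaround for older ActiveRecord versions (roughly Rails 2.x/3.0, where the Column constructor took name, default, sql_type, null in that order; check your version before relying on it): override self.columns and add a class-level column helper so no table or migration is required:

        class Post < ActiveRecord::Base
          def self.columns
            @columns ||= []
          end

          def self.column(name, sql_type = nil, default = nil, null = true)
            columns << ActiveRecord::ConnectionAdapters::Column.new(
              name.to_s, default, sql_type.to_s, null)
          end

          column :title,     :string
          column :published, :boolean
        end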

    Read the article

  • Why are my JavaScript variables not persisting across functions?

    - by Eric Belair
    I have the following JavaScript in my HTML page referencing an HTML form on the page:

        <script type="text/javascript">
        <!--
        var myForm = document.myForm;

        function validateForm() {
            if (myForm.myInput == "") {
                alert("Please input some text.");
                return false;
            }
            myForm.submit();
        }

        function showFormInput() {
            myForm.reset();
            document.getElementById('myInput').style.display = 'inline';
        }
        //-->
        </script>
        ...
        <form name="myForm" id="myForm" action="..." method="post">
            <input id="myInput" name="myInput" type="text" value="" style="display:none;" />
        </form>

    Both functions are throwing an exception when trying to access the variable myForm, saying that "myForm is null or not an object". Why is this occurring?
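
    A sketch of the likely fix: the script block runs before the form element has been parsed, so document.myForm is undefined at that point. Looking the form up inside the functions (so it is resolved at call time, after the form exists) avoids the problem; note the comparison should also be against the input's value:

        function getMyForm() {
            return document.getElementById('myForm');   // resolved at call time
        }

        function validateForm() {
            var myForm = getMyForm();
            if (myForm.myInput.value == "") {
                alert("Please input some text.");
                return false;
            }
            myForm.submit();
        }

        function showFormInput() {
            getMyForm().reset();
            document.getElementById('myInput').style.display = 'inline';
        }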

    Read the article

  • How can I parse_url in PHP when there is a URL in a string variable?

    - by Eric O
    I am admittedly a PHP newbie, so I need some help. I am creating a self-designed affiliate program for my site and have the option for an affiliate to add a SubID to their link for tracking. Without having control over what is entered, I have been testing different scenarios and found a bug when a full URL is entered (i.e. "http://example.com"). In my PHP I can grab the variable from the string no problem. My problem comes from when I get the referring URL and parse it (since I need to parse the referring URL to get the host name for other uses). Code below:

        $refURL = getenv("HTTP_REFERER");
        $parseRefURL = parse_url($refURL);

    WORKS when the incoming link is (for example):

        http://example.com/?ref=REFERRER'S-ID&sid=www.test.com

    ERROR when the incoming link is (notice the addition of "http://" after "sid="):

        http://example.com/?ref=REFERRER'S-ID&sid=http://www.test.com

    Here is the warning message:

        Warning: parse_url(/?ref=REFERRER'S-ID&sid=http://www.test.com) [function.parse-url]: Unable to parse url in /home4/'directory'/public_html/hosterdoodle/header.php on line 28

    Any ideas on how to keep the parse_url function from being thrown off when someone may decide to place a URL in a variable? (I actually tested this problem down to the point that it will throw the error with as little as ":/" in the variable.)
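
    A sketch of one way around it: URL-encode the SubID when the affiliate link is generated so it cannot break the query string, and treat a false return from parse_url as "no usable referrer" when reading it back (variable names other than the PHP built-ins are illustrative):

        // When generating the affiliate link:
        $link = 'http://example.com/?ref=' . urlencode($refId) . '&sid=' . urlencode($subId);

        // When reading the referrer back:
        $refURL = getenv("HTTP_REFERER");
        $parts  = parse_url($refURL);               // returns false on badly broken URLs
        $host   = ($parts !== false && isset($parts['host'])) ? $parts['host'] : '';

        $query = array();
        if ($parts !== false && isset($parts['query'])) {
            parse_str($parts['query'], $query);     // decodes the encoded sid again
        }
        $sid = isset($query['sid']) ? $query['sid'] : '';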

    Read the article

  • .click() callback references local variable from the calling method instead of copying by value

    - by Eric Freese
    The following jQuery JavaScript code is included on an otherwise empty page:

        $(function() {
            for (var i = 0; i < 10; i++) {
                element = $('<div>' + i + '</div>');
                element.click(function() {
                    alert(i);
                });
                $('body').append(element);
            }
        });

    The desired behavior is that this code should generate 10 div elements numbered from 0 to 9. When you click on a div element, an alert popup will show the number of the div element you clicked on (i.e. if a user clicks on the div element labeled '4', the alert popup should show the number 4). The alert popup instead shows the number 10 regardless of which div element is clicked on. How can I modify this code to make it behave in the desired way?
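
    A sketch of the usual fix: all ten handlers close over the same variable i, which is already 10 by the time any click happens. Capturing the current value per iteration (here with an immediately-invoked function) gives each handler its own copy:

        $(function() {
            for (var i = 0; i < 10; i++) {
                (function(n) {                      // n is a per-iteration copy of i
                    var element = $('<div>' + n + '</div>');
                    element.click(function() {
                        alert(n);
                    });
                    $('body').append(element);
                })(i);
            }
        });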

    Read the article

  • Why can't I get XSLT to work in Chrome?

    - by Eric
    I have an XSLT sample copied straight from http://www.w3schools.com/xsl/xsl_transformation.asp, which I can't seem to view in Google Chrome. However, it seems to work fine in IE. Does anyone know why this would be? EDIT: The online version works fine, but the local copy does not.

    Read the article

  • Strategy for Storing Multiple Nullable Booleans in SQL

    - by Eric J.
    I have an object (happens to be C#) with about 20 properties that are nullable booleans. There will be perhaps a few million such objects persisted to a SQL database (currently SQL Server 2008 R2, but MySQL may need to be supported in the future). The instances themselves are relatively large because they contain about a paragraph of text as well as some other unrelated properties. For a given object instance, most of the properties will be null most of the time. When users search for instances of such objects, they will select perhaps 1-3 of the nullable boolean properties and search for instances where at least one of those 1-3 properties is non-null (OR search). My first thought is to persist the object to a single table with nullable BIT columns representing the nullable boolean properties. However, this strategy will require one index per BIT column to avoid performing a table scan when searching. Further, each index would not be particularly selective since there are only three possible values per index. Is there a better way to approach this problem?
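
    One alternative worth sketching (an illustration with hypothetical table and column names, not a recommendation from the question itself): keep the main row narrow and move the nullable flags into a child table that only stores non-null values, so a single composite index serves every flag and the OR search becomes a join:

        CREATE TABLE ItemFlag (
            ItemId    INT      NOT NULL,   -- FK to the main table
            FlagId    TINYINT  NOT NULL,   -- which of the ~20 boolean properties
            FlagValue BIT      NOT NULL,   -- only non-null values are stored
            CONSTRAINT PK_ItemFlag PRIMARY KEY (FlagId, ItemId)
        );

        -- "Instances where at least one of flags 3, 7 or 12 is non-null":
        SELECT DISTINCT i.*
        FROM Item AS i
        JOIN ItemFlag AS f ON f.ItemId = i.ItemId
        WHERE f.FlagId IN (3, 7, 12);

    The trade-off is an extra join on read and an extra table to keep in sync on write, but only one index instead of twenty.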

    Read the article

  • Database/Object Mapping

    - by Eric
    Hello everyone, this is a beginner question, but it's been frustrating me... I am using C#, by the way. I'd like to make a few classes, each with their own properties and methods. I would also like to have a database to store certain instances of these classes in case I would ever need to look at them again. So, for example...

        class Polygon
        {
            String name;
            Double perimiter;
            int numSides;

            public Double GetArea()
            {
                // ...
            }
        }

        class Circle
        {
            String name;
            Double radius;

            public void PrintName()
            {
                // ...
            }
        }

    Say I've got these classes. I also want a database that has the TABLES "Polygon" and "Circle" with the COLUMNS "name", "perimeter", "radius", etc. And I want an easy way to save a class instance into the database, or pull a class instance out of the database. I have previously been using MS Access for my database stuff, which I don't mind using, but I would prefer if nothing other than .NET needed to be installed. I've been researching online a bit, but I wanted to get some opinions on here. I have looked at Linq-to-Sql, but it seems you need SQL Server. Is this true? If so, I'd really rather not use it because I don't want to have to have it installed everywhere. Anyway, I'm just fishing for some ideas/insights/suggestions/etc., so please help me out if you can. Thanks.
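
    Since MS Access is already in the picture, a minimal hand-rolled mapping over OleDb avoids installing anything beyond .NET. A sketch, assuming a Polygon table with Name, Perimeter and NumSides columns and public properties on the class (the connection string and table layout are illustrative):

        using System.Data.OleDb;

        public static void SavePolygon(Polygon p, string connectionString)
        {
            using (var conn = new OleDbConnection(connectionString))
            using (var cmd = new OleDbCommand(
                "INSERT INTO Polygon ([Name], Perimeter, NumSides) VALUES (?, ?, ?)", conn))
            {
                // OleDb uses positional parameters, so the order here must match the SQL
                cmd.Parameters.AddWithValue("?", p.Name);
                cmd.Parameters.AddWithValue("?", p.Perimeter);
                cmd.Parameters.AddWithValue("?", p.NumSides);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

    If something more automatic is wanted without a full SQL Server install, SQL Server Compact (a single-file engine that can be deployed alongside the app) is another option to look into.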

    Read the article

  • Where's the rest of the space used in this table?

    - by Eric H.
    I'm using SQL Server 2005. I have a table whose row size should be 124 bytes. It's all ints or floats, no NULL columns (so everything is fixed width). There is only one index, clustered. The fill factor is 0. After inserting a ton of data, sp_spaceused returns the following:

        name          rows       reserved     data         index_size  unused
        OHLC_Bar_Trl  117076054  29807664 KB  29711624 KB  92344 KB    3696 KB

    which shows a row size of approximately (29807664 * 1024) / 117076054 = 260 bytes/row. Where's the rest of the space? Is there some DBCC command I need to run to tighten up this table? (I could not insert the rows in correct index order, so maybe it's just internal fragmentation.)
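
    A sketch of how to check and then fix this on SQL Server 2005 (table name taken from the output above): look at page density and fragmentation for the clustered index, then rebuild it, which rewrites the rows at the index's fill factor and usually reclaims half-empty pages:

        -- How full are the pages, and how fragmented is the clustered index?
        SELECT index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.OHLC_Bar_Trl'),
                                             NULL, NULL, 'SAMPLED');

        -- Rewrite the table and indexes to pack pages more tightly.
        ALTER INDEX ALL ON dbo.OHLC_Bar_Trl REBUILD;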

    Read the article

  • Working with the IE cache from C# & WPF

    - by Eric
    I'm writing a program in C# using the WPF framework. I need to display images, and I'd like to cache them to avoid downloading them constantly. I can code my own cache; however, IE already has a caching system. I can find code to read entries out of the IE cache, but I've found nothing dealing with the issue of adding items to the cache. Is there a good way to do it, or should I just implement a separate cache?
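
    If rolling a small cache turns out to be the simpler route, here is a sketch of a file-based one keyed on a hash of the URL (class name and folder layout are illustrative, not an existing API):

        using System;
        using System.IO;
        using System.Net;
        using System.Security.Cryptography;
        using System.Text;

        public static class ImageCache
        {
            // Downloads the file once and returns a local path WPF can load images from.
            public static string GetLocalPath(Uri url, string cacheDir)
            {
                byte[] hash = MD5.Create().ComputeHash(Encoding.UTF8.GetBytes(url.AbsoluteUri));
                string file = Path.Combine(cacheDir, BitConverter.ToString(hash).Replace("-", ""));

                if (!File.Exists(file))
                {
                    Directory.CreateDirectory(cacheDir);
                    using (var client = new WebClient())
                        client.DownloadFile(url, file);
                }
                return file;
            }
        }

    It may also be worth checking BitmapImage.UriCachePolicy, since WPF images loaded by URI reportedly go through the WinINet (IE) cache already.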

    Read the article

  • Elegantly Handle Repetitive Property Code in C#

    - by Eric J.
    The language shortcut

        public string Code { get; set; }

    saves a bit of typing when defining trivial properties in C#. However, I find myself writing highly repetitive, not-quite-as-trivial property code that still follows a clear pattern, e.g.

        public string Code
        {
            get { return code; }
            set
            {
                if (code != value)
                {
                    code = value;
                    NotifyPropertyChanged("Code");
                }
            }
        }

    I can certainly define a Visual Studio snippet to reduce typing. However, if I need to add something to my pattern, I have to go back and change quite a bit of existing code. Is there a more elegant approach? Is a snippet the best way to go?
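
    One common way to shrink the pattern (a sketch, assuming the class already exposes the NotifyPropertyChanged(string) method used above) is to move the compare-assign-notify logic into a single generic helper, so changing the pattern later means changing one method:

        // requires System.Collections.Generic for EqualityComparer<T>
        private bool SetField<T>(ref T field, T value, string propertyName)
        {
            if (EqualityComparer<T>.Default.Equals(field, value))
                return false;
            field = value;
            NotifyPropertyChanged(propertyName);
            return true;
        }

        private string code;
        public string Code
        {
            get { return code; }
            set { SetField(ref code, value, "Code"); }
        }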

    Read the article

  • How to view a DataTable while debugging

    - by Eric
    I'm just getting started using ADO.NET and DataSets and DataTables. One problem I'm having is it seems pretty hard to tell what values are in the data table when trying to debug. What are some of the easiest ways of quickly seeing what values have been saved in a DataTable? Is there some way to see the contents in Visual Studio while debugging, or is the only option to write the data out to a file? I've created a little utility function that will write a DataTable out to a CSV file. Yet the resulting CSV file created was cut off. About 3 lines from what should have been the last line, in the middle of writing out a System.Guid, the file just stops. I can't tell if this is an issue with my CSV conversion method, or the original population of the DataTable. Update: Forget the last part; I just forgot to flush my stream writer.
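
    Besides the built-in DataSet Visualizer (the magnifying-glass icon that appears on a DataTable or DataSet variable while stopped in the debugger), a quick dump to disk works from the Immediate window; a small sketch, where the variable and path are illustrative:

        // Writes the rows plus schema to an XML file you can open while still at the breakpoint.
        myTable.WriteXml(@"C:\temp\myTable.xml", System.Data.XmlWriteMode.WriteSchema);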

    Read the article

  • Selecting a row in SSMS causes Entity Framework 4 to fail

    - by Eric J.
    I have a simple Entity Framework 4 unit test that creates a new record, saves it, attempts to find it, then deletes it. All works great, unless...

    ... I open up SQL Server Management Studio while stopped at a breakpoint in the unit test and execute a SELECT statement that returns the row I just created (not SELECT FOR UPDATE, not WITH (updlock), no transaction, just a plain SELECT). If I do that before attempting to find the row I just created, I don't find the row. If I instead do that after finding the row but before deleting the row, I do find the row but get an OptimisticConcurrencyException. This is consistently repeatable. Unit Test:

        [TestMethod()]
        public void CreateFindDeleteActiveParticipantsTest()
        {
            // Setup this test
            Participant utPart = CreateUTParticipant();
            ctx.Participants.AddObject(utPart);
            ctx.SaveChanges();

            // External SELECT Point #1:
            // part is null

            // Find participant
            Participant part = ParticipantRepository.Find(UT_SURVEY_ID, UT_TOKEN);
            Assert.IsNotNull(part, "Expected to find a participant");

            // External SELECT Point #2:
            // SaveChanges throws OptimisticConcurrencyException

            // Cleanup this test
            ctx.Participants.DeleteObject(utPart);
            ctx.SaveChanges();
        }

    Read the article

  • Generate Permutations of a List

    - by Eric Mercer
    I'm writing a function that takes a list and returns a list of permutations of the argument. I know how to do it by using a function that removes an element and then recursively use that function to generate all permutations. I now have a problem where I want to use the following function:

        (define (insert-everywhere item lst)
          (define (helper item L1 L2)
            (if (null? L2)
                (cons (append L1 (cons item '())) '())
                (cons (append L1 (cons item L2))
                      (helper item (append L1 (cons (car L2) '())) (cdr L2)))))
          (helper item '() lst))

    This function will insert the item into every possible location of the list, like the following:

        (insert-everywhere 1 '(a b))

    will get:

        '((1 a b) (a 1 b) (a b 1))

    How would I use this function to get all permutations of a list? I now have:

        (define (permutations lst)
          (if (null? lst)
              '()
              (insert-helper (car lst) (permutations (cdr lst)))))

        (define (insert-helper item lst)
          (cond ((null? lst) '())
                (else (append (insert-everywhere item (car lst))
                              (insert-helper item (cdr lst))))))

    but doing (permutations '(1 2 3)) just returns the empty list '().
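
    A sketch of the likely culprit: insert-helper is fine, but the base case of permutations returns '(), the list of no permutations, so every level of the recursion has nothing to insert into. The empty list has exactly one permutation, the empty list itself, so the base case should be '(()):

        (define (permutations lst)
          (if (null? lst)
              '(())   ; one permutation of the empty list
              (insert-helper (car lst) (permutations (cdr lst)))))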

    Read the article

  • Database assignment

    - by eric
    Hi, given the following statements:

        CREATE TABLE T1 (A INTEGER NOT NULL);
        CREATE TABLE T3 (A SMALLINT NOT NULL);
        INSERT T1 VALUES (32768.5);
        SELECT * FROM T1;
        INSERT T3 SELECT * FROM T1;
        SELECT * FROM T3;

    What is the output of the above queries? If any error occurs, state which line causes it. Explain your answer!

    Read the article

  • Code improvement

    - by eric
    Could you write this 'cleaner'? Just a simple question from a beginner :)

        if(isset($_GET['tid']) && trim($_GET['tid'])!==""){
            $act = 'tid';
            $tid = trim($_GET['tid']);
        }elseif(isset($_GET['fid']) && trim($_GET['fid'])!==""){
            $act = 'fid';
            $fid = trim($_GET['fid']);
        }elseif(isset($_GET['mid']) && trim($_GET['mid'])!==""){
            $act = 'mid';
        }elseif(isset($_GET['act']) && trim($_GET['act'])!==""){
            $act = trim($_GET['act']);
        }else{
            $act = "";
        }
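
    One possible tidier version (a sketch that keeps the same behaviour: the first non-empty parameter wins, tid/fid also keep their trimmed values, and act passes its value through):

        $act = "";
        foreach (array('tid', 'fid', 'mid', 'act') as $key) {
            if (isset($_GET[$key]) && trim($_GET[$key]) !== "") {
                $act = ($key === 'act') ? trim($_GET[$key]) : $key;
                if ($key === 'tid') { $tid = trim($_GET['tid']); }
                if ($key === 'fid') { $fid = trim($_GET['fid']); }
                break;
            }
        }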

    Read the article

  • Further filter SQL results

    - by eric
    I've got a query that returns a proper result set, using SQL 2005. It is as follows:

        select case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4'
                    then '2009 Q2'
                    else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101)
               end as [Quarter],
               bugtypes.bugtypename,
               count(bug.bugid) as [Total]
        from bug
        left outer join bugtypes
            on bug.crntbugtypeid = bugtypes.bugtypeid
           and bug.projectid = bugtypes.projectid
        where (bug.projectid = 44
               and bug.currentowner in (-1000000031,-1000000045)
               and bug.crntplatformid in (42,37,25,14))
           or (bug.projectid = 44
               and bug.currentowner in (select memberid from groupmembers where projectid = 44 and groupid in (87,88))
               and bug.crntplatformid in (42,37,25,14))
        group by case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4'
                      then '2009 Q2'
                      else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101)
                 end,
                 bugtypes.bugtypename
        order by 1,3 desc

    It produces a nicely grouped list of years and quarters, an associated descriptor, and a count of incidents in descending count order. What I'd like to do is further filter this so it shows only the 10 most submitted incidents per quarter. What I'm struggling with is how to take this result set and achieve that.
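
    A sketch of one way to do that on SQL 2005: wrap the existing query in a derived table, number the rows within each quarter with ROW_NUMBER(), and keep only the first 10 of each partition:

        select [Quarter], bugtypename, Total
        from (
            select [Quarter], bugtypename, Total,
                   row_number() over (partition by [Quarter] order by Total desc) as rn
            from (
                -- ... the existing grouped query (without its order by clause) goes here ...
            ) as grouped
        ) as ranked
        where rn <= 10
        order by [Quarter], Total desc;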

    Read the article

  • SharePoint job queued, but never starts.

    - by Eric Uniacke
    I'm attempting to retract a solution from SharePoint. I've started the job via the Central Admin site as well as from stsadm. The job is queued; however, it stays in a pending state for days. The SharePoint Administration and Timer services are started. Any suggestions as to why the retraction jobs are queued but never started? I'm really at a loss as to what to try next, so anything will be helpful. Thank you!
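
    One thing commonly worth trying when deployment or retraction jobs sit in the queue (a suggestion, not a confirmed fix for this case) is to force the pending administrative jobs to run on each server in the farm:

        stsadm -o execadmsvcjobs

    Restarting the Windows SharePoint Services Timer service on each server is the other usual step if the job still sits in the queue afterwards.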

    Read the article

  • How can I serve up color-coded Java code using PHP?

    - by Eric
    I'd like to embed code from my SVN repository into my website, using PHP. The SVN has public anonymous access, so the PHP code should be fine reading it. The code on said SVN is java, and so far I've had no luck finding a syntax-highlighter to make the code more readable. Ideally I'd like one that uses CSS classes so that I can change the colors to match the look of the website. Could someone point me to a PHP library that highlights Java code?
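
    One library that may fit (an assumption to verify, since the method names below come from GeSHi's documentation rather than from this question): GeSHi supports Java and can emit CSS classes instead of inline styles, so the colours can be themed from the site's stylesheet:

        require_once 'geshi/geshi.php';   // path depends on where GeSHi is installed

        // The source would come from the anonymous-access SVN URL; the URL here is illustrative.
        $source = file_get_contents('http://svn.example.com/trunk/src/Example.java');

        $geshi = new GeSHi($source, 'java');
        $geshi->enable_classes();                          // use CSS classes, not inline styles
        echo '<style>' . $geshi->get_stylesheet() . '</style>';
        echo $geshi->parse_code();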

    Read the article

  • MS Access raise form events programmatically

    - by Eric G
    Is it possible to raise built-in MS Access form events programmatically? I have a feeling it isn't, but thought I would check. (I am using Access 2003.) For instance, I want to do something like this within a private sub on the form:

        RaiseEvent Delete(Cancel)

    and have it trigger the Access.Form delete event, i.e. without actually deleting a bound record. Note my delete event is not handled by the form itself but by an external class, so I can't simply call Form_Delete(Cancel).

    Read the article

  • Optimizing a "set in a string list" to a "set as a matrix" operation

    - by Eric Fournier
    I have a set of strings which contain space-separated elements. I want to build a matrix which will tell me which elements were part of which strings. For example:

        ""
        "A B C"
        "D"
        "B D"

    should give something like:

          A B C D
        1
        2 1 1 1
        3       1
        4   1   1

    Now I've got a solution, but it runs slow as molasses, and I've run out of ideas on how to make it faster:

        reverseIn <- function(vector, value) {
          return(value %in% vector)
        }

        buildCategoryMatrix <- function(valueVector) {
          allClasses <- c()
          for(classVec in unique(valueVector)) {
            allClasses <- unique(c(allClasses, strsplit(classVec, " ", fixed=TRUE)[[1]]))
          }
          resMatrix <- matrix(ncol=0, nrow=length(valueVector))
          splitValues <- strsplit(valueVector, " ", fixed=TRUE)
          for(cat in allClasses) {
            if(cat=="") {
              catIsPart <- (valueVector == "")
            } else {
              catIsPart <- sapply(splitValues, reverseIn, cat)
            }
            resMatrix <- cbind(resMatrix, catIsPart)
          }
          colnames(resMatrix) <- allClasses
          return(resMatrix)
        }

    Profiling the function gives me this:

        $by.self
                 self.time self.pct total.time total.pct
        "match"      31.20    34.74      31.24     34.79
        "FUN"        30.26    33.70      74.30     82.74
        "lapply"     13.56    15.10      87.86     97.84
        "%in%"       12.92    14.39      44.10     49.11

    So my actual questions would be:

    - Where is the 33% spent in "FUN" coming from?
    - Would there be any way to speed up the %in% call?

    I tried turning the strings into factors prior to going into the loop so that I'd be matching numbers instead of strings, but that actually makes R crash. I've also tried going for partial matrix assignment (i.e., resMatrix[i,x] <- 1) where i is the number of the string and x is the vector of factors. No dice there either, as it seems to keep on running infinitely.
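
    A sketch of a vectorized alternative, as one possible direction (it returns a 0/1 indicator matrix rather than the 1/blank layout above, and treats "" as its own category): split every string once, then fill the matrix with a single two-column index assignment instead of looping over classes:

        buildCategoryMatrix2 <- function(valueVector) {
          splitValues <- strsplit(valueVector, " ", fixed = TRUE)
          splitValues[valueVector == ""] <- ""          # keep "" as a category of its own
          allClasses <- unique(unlist(splitValues))

          res <- matrix(0, nrow = length(valueVector), ncol = length(allClasses),
                        dimnames = list(NULL, allClasses))

          counts <- vapply(splitValues, length, integer(1))
          idx <- cbind(rep(seq_along(splitValues), counts),    # row of each element
                       match(unlist(splitValues), allClasses)) # column of each element
          res[idx] <- 1
          res
        }

    The per-class %in% scans disappear entirely, since match() runs once over all elements.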

    Read the article
