Search Results

Search found 10459 results on 419 pages for 'elaine blog'.


  • Page content layout issues

    - by Prupel
    I'm designing a theme for a blog and I'm having some trouble getting a layout working. Here's an image of what I want. This diagram represents the individual posts and not the website itself, so it will be contained in a box of its own; let's call it .container. The purple and green are in another box; let's call it .content. The other elements will be called by their color for now. So here's more or less what the CSS looks like: .container { display:block; margin:0 25px; } .gray, .blue, .content { display:block; width:100%; } .purple { display:inline-block; width:125px; height:100%; text-align:center; } .green { display:inline-block; } That's all there is at the moment. I tried float but that had no effect. What's happening is something like this. Here are a few more things you should know: .container's width is NOT set; it is auto. .purple and .green don't necessarily need to be the same size, as long as .green doesn't go to that side. .purple CAN have a set height. .green is where the meat is; that's where the actual post goes, keep that in mind. I don't think tables will help; the problem is inside .content.
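
    One common way to get a fixed-width column (.purple) next to a fluid one (.green) is the classic float/overflow pairing. This is only a sketch of that idea using the class names from the question, not the asker's final CSS:

      /* minimal sketch, assuming .purple is the fixed 125px column and .green holds the post */
      .content { overflow: hidden; }                               /* contains the float */
      .purple  { float: left; width: 125px; text-align: center; }
      .green   { overflow: hidden; }                               /* new formatting context: fills the rest, never wraps under .purple */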

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database on an on-premise SQL Server 2012 instance to SQL Azure. Easy, right? Methods After a fair bit of research I've come across the following methods: Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source. Processes Using whichever method, the process could be either: export from on-premise SQL Server 2012 to a local BACPAC, upload the BACPAC to blob storage, then import the BACPAC into SQL Azure via the Hosted DAC. Or: export from on-premise SQL Server 2012 to a local BACPAC, then import the BACPAC into SQL Azure via the Client DAC. Question All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere?
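
    For what it's worth, a sketch of the <Exec /> route using SqlPackage.exe (the DAC framework's command-line tool). The tool path, server names, database names and credentials below are placeholders, and the option short forms should be verified against the DAC version actually installed:

      <Target Name="DeployToAzure">
        <!-- export the on-premise database to a local .bacpac -->
        <Exec Command="SqlPackage.exe /a:Export /ssn:localhost /sdn:MyDb /tf:$(OutputPath)MyDb.bacpac" />
        <!-- import the .bacpac into SQL Azure -->
        <Exec Command="SqlPackage.exe /a:Import /sf:$(OutputPath)MyDb.bacpac /tsn:myserver.database.windows.net /tdn:MyDb /tu:user@myserver /tp:secret" />
      </Target>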

    Read the article

  • Query next/previous record

    - by Rob
    I'm trying to find a better way to get the next or previous record from a table. Let's say I have a blog or news table: CREATE TABLE news ( news_id INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT, news_datestamp DATETIME NOT NULL, news_author VARCHAR(100) NOT NULL, news_title VARCHAR(100) NOT NULL, news_text MEDIUMTEXT NOT NULL ); Now on the frontend I want navigation buttons for the next or previous records. If I'm sorting by news_id, I can do something rather simple like: SELECT MIN(news_id) AS next_news_id FROM news WHERE news_id > '$old_news_id' LIMIT 1 SELECT MAX(news_id) AS prev_news_id FROM news WHERE news_id < '$old_news_id' LIMIT 1 But the news can be sorted by any field, and I don't necessarily know which field is sorted on, so this won't work if the user sorts on news_author, for example. I've resorted to the rather ugly and inefficient method of sorting the entire table and looping through all records until I find the record I need. $res = mysql_query("SELECT news_id FROM news ORDER BY `$sort_column` $sort_way"); $found = $prev = $next = 0; while(list($id) = mysql_fetch_row($res)) { if($found) { $next = $id; break; } if($id == $old_news_id) { $found = true; continue; } $prev = $id; } There's got to be a better way.
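
    A sketch of the usual keyset approach for an arbitrary sort column, with news_id as a tie-breaker; $sort_col, $cur_val and $old_news_id are assumed to be known for the current record and whitelisted/escaped before being interpolated:

      -- next record when sorting ascending on $sort_col
      SELECT news_id FROM news
      WHERE ($sort_col > '$cur_val')
         OR ($sort_col = '$cur_val' AND news_id > '$old_news_id')
      ORDER BY $sort_col ASC, news_id ASC
      LIMIT 1;
      -- previous record: flip both comparisons and ORDER BY ... DESC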

    Read the article

  • Why would ASP.NET MVC use session state?

    - by ray247
    Following the ASP.NET team's recommendation to use cache instead of session, we stopped using session state while working with the WebForms model over the last few years. So we normally have session turned off in web.config: <sessionState mode="Off" /> But now, when I'm testing an ASP.NET MVC application with this setting, it throws an error in the SessionStateTempDataProvider class inside the MVC framework, asking me to turn session state on. I did, and it worked. Looking at the source, it uses session: Dictionary<string, object> tempDataDictionary = httpContext.Session[TempDataSessionStateKey] as Dictionary<string, object>; // line 20 in SessionStateTempDataProvider.cs So, why would they use session here? What am I missing? Thanks, Ray. ======================================================== Edit Sorry, I didn't mean for this post to become a session vs. cache debate; in the context of ASP.NET MVC, I was just wondering why session is used here. In this Scott Watermasysk blog post he also mentions turning off session as a good practice, so I'm just wondering: do I have to turn it on to use MVC from here on?
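
    If the goal is simply to keep session off and TempData is not actually needed, one workaround (a sketch, not an official recommendation) is to swap in a do-nothing ITempDataProvider:

      public class NullTempDataProvider : ITempDataProvider
      {
          public IDictionary<string, object> LoadTempData(ControllerContext controllerContext)
          {
              return new Dictionary<string, object>();   // nothing carried over between requests
          }

          public void SaveTempData(ControllerContext controllerContext, IDictionary<string, object> values)
          {
              // intentionally a no-op - TempData is never persisted
          }
      }

      // in a common base controller:
      public BaseController() { TempDataProvider = new NullTempDataProvider(); }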

    Read the article

  • Multiple IntenseDebate Comment Counts

    - by Aristotle
    I just set up IntenseDebate on my blog this evening and am, for the most part, pleased with it. One thing I did see is that they offered me a small snippet to show the current number of comments: <script> var idcomments_acct = 'abcdefgef12345678mykey8675309acdc'; var idcomments_post_id; var idcomments_post_url; </script> <script type="text/javascript" src="http://www.intensedebate.com/js/genericLinkWrapperV2.js"></script> This is nice, but what I would like to do is have something similar on my archives page where many posts are listed - not just one. Presently the page looks like this: [Some Post Title / Author Name / Short abstract from this post...] [Some Post Title / Author Name / Short abstract from this post...] I would like it to look like this: [Some Post Title / Author Name / Short abstract from this post... / 7 Comments] [Some Post Title / Author Name / Short abstract from this post... / 3 Comments] But I'm not exactly sure how I can do this with IntenseDebate. Do they offer any sort of method to gather the total number of comments for multiple pages from a single page?

    Read the article

  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces, and to files that are already checked out by people, after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: fresh TFS instance, upgraded databases.) I've checked the TFS Installation guide and searched the web; all I could find were upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I've created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder if it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to some paper or blog post about it. Thanks in advance...

    Read the article

  • Project management and bundling dependencies

    - by Joshua
    I've been looking for ways to learn about the right way to manage a software project, and I've stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me. To sum up, the author lists a bunch of features of a project and how much those features contribute to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html In particular, I don't understand the author's stance on bundling dependencies with your project. These are: == Bundling == Your source only comes with other code projects that it depends on [ +20 points of FAIL ] Why is this a problem (especially given the last point)? If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ] Doesn't this necessarily have to be the case for software built against 3rd party libs? Your code needs that other code to be compiled into its library before the linker can work? If you have modified those other bundled code bits [ +40 points of FAIL ] If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want to customize a build of some lib, say wxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not use a high-level make script with the params already written in, and distribute that? Furthermore (especially in a Windows env), if your code base is dependent on a particular version of a lib (that you also need to custom compile for your project), wouldn't it be easier to give the user the code yourself (because in this case, it is unlikely that the user will already have the correct version installed)? So how would you respond to these comments, and what points may I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?

    Read the article

  • Is there a Firebug console -vsdoc.js?

    - by David Murdoch
    If not, does anyone care to write one? I would do it myself...but I don't have time right now...maybe next week (unless someone beats me to it). If you are bored and want to compile the vsdoc: Here is the Firebug API. Here is a blog post about the format for VS doc comments for intellisense. Here is an example vsdoc (jquery-1.4.1-vsdoc.js). I created the following because I kept typing cosnole instead of console. You can use it as a starting point (ish). console = { /// <summary> /// 1: The javascript console /// </summary> /// <returns type="Object" /> }; console.log = function (object) { /// <summary> /// Write to the console's log /// </summary> /// <returns type="null" /> /// <param name="object" type="Object"> /// Write the object to the console's log /// </param> };

    Read the article

  • Returning HTML in the JS portion of a respond_to block throws errors in IE

    - by Horace Loeb
    Here's a common pattern in my controller actions: respond_to do |format| format.html {} format.js { render :layout => false } end I.e., if the request is non-AJAX, I'll send the HTML content in a layout on a brand new page. If the request is AJAX, I'll send down the same content, but without a layout (so that it can be inserted into the existing page or put into a lightbox or whatever). So I'm always returning HTML in the format.js portion, yet Rails sets the Content-Type response header to text/javascript. This causes IE to throw this fun little error message: Of course I could set the content-type of the response every time I did this (or use an after_filter or whatever), but it seems like I'm trying to do something relatively standard and I don't want to add additional boilerplate code. How do I fix this problem? Alternatively, if the only way to fix the problem is to change the content-type of the response, what's the best way to achieve the behavior I want (i.e., sending down content with layout for non-AJAX and the same content without a layout for AJAX) without having to deal with these errors? Edit: This blog post has some more info
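
    A minimal sketch of one workaround, overriding the response content type only in the js branch (my suggestion, not the linked post's answer); the rest of the action stays as in the question:

      respond_to do |format|
        format.html {}
        # the body is still HTML, so say so even though the request came in as .js
        format.js { render :layout => false, :content_type => "text/html" }
      end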

    Read the article

  • How do I put an ASP.NET website project and class library projects in one .sln file on Subversion

    - by JustinP8
    My company has several class libraries we use in multiple website projects (not web application projects). Website projects don't have .sln files, but I'm sure I've read in my past research that you can make a blank solution and put your website and class library projects in it. After answers to my previous questions, this is the direction I'm going (based slightly on http://amadiere.com/blog/2009/06/multiple-subversion-projects-in-one-visual-studio-solution-using-svnexternals/): /websites /website1 /trunk /website1 /libraries /library1 /trunk /library1 /library2 /trunk /library2 /etc... Then I planned on using svn:externals to copy /library1, /library2, and so on into the working_copy/websites/website1/ folder. I want my team members to be able to check out the /trunk folder for website1 and get a .sln file, a /library1 external, a /library2 external, etc. I want that .sln file to contain the website1 website project and all of the library external projects. Hopefully that would look something like: /working_copy /websites /website1 /trunk /website1 /library1 (svn:external of libraries/library1/trunk/library1) /library2 (svn:external of libraries/library2/trunk/library2) /etc. website1.sln So, at the end of all of this, the goal is that my teammates check out the trunk, open the solution, and everyone has the exact same solution. When we commit, everything is committed appropriately to Subversion (the website code and the libraries are committed to their appropriate places in the repo). How have others solved these issues? How can I make a .sln file that my team members and I can share in this manner?
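
    A sketch of how the externals might be declared with repository-relative URLs (svn 1.5+ "URL target" syntax); the layout is taken from the question, but the exact property format is my assumption and worth checking against your svn version:

      svn propset svn:externals "^/libraries/library1/trunk/library1 library1
      ^/libraries/library2/trunk/library2 library2" websites/website1/trunk/website1
      svn commit -m "Pull shared libraries into website1 via svn:externals"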

    Read the article

  • Android map performance with > 800 overlays of KML data

    - by span
    I have a shape file which I have converted to a KML file that I wish to read coordinates from and then draw paths between the coordinates on a MapView. With the help of this great post: How to draw a path on a map using kml file? I have been able to read the KML into an ArrayList of "Placemarks". This great blog post then showed how to take a list of GeoPoints and draw a path: http://djsolid.net/blog/android---draw-a-path-array-of-points-in-mapview The example in the above post only draws one path between some points, however, and since I have many more paths than that I am running into some performance problems. I'm currently adding a new RouteOverlay for each of the separate paths. This results in over 800 overlays when they have all been added, which hurts performance, and I would love some input on what I can do to improve it. Here are some options I have considered: Try to add all the points to a List which can then be passed into a class that extends Overlay. In that new class perhaps it would be possible to add and draw the paths in a single Overlay layer? I'm not sure how to implement this, though, since the paths do not always intersect and they have different start and end points. At the moment I'm adding each path, which has several points, to its own list and then I add that to an Overlay. That results in over 700 overlays... Simplify the KML or SHP. Instead of having over 700 different paths, perhaps there is some way to merge them into, say, 100 paths or fewer? Since a lot of the paths intersect at some point, it should be possible to modify the original SHP file so that it merges all intersections. Since I have never worked with these kinds of files before, I have not been able to find a way to do this in QGIS. If someone knows how to do this I would love some input on that. Here is a link to the group of shape files if you are interested: http://danielkvist.net/cprg_bef_cbana_polyline.shp http://danielkvist.net/cprg_bef_cbana_polyline.shx http://danielkvist.net/cprg_bef_cbana_polyline.dbf http://danielkvist.net/cprg_bef_cbana_polyline.prj Anyway, here is the code I'm using to add the overlays. Many thanks in advance.
RoutePathOverlay.java

package net.danielkvist;

import java.util.List;

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.Point;
import android.graphics.RectF;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.MapView;
import com.google.android.maps.Overlay;
import com.google.android.maps.Projection;

public class RoutePathOverlay extends Overlay {

    private int _pathColor;
    private final List<GeoPoint> _points;
    private boolean _drawStartEnd;

    public RoutePathOverlay(List<GeoPoint> points) {
        this(points, Color.RED, false);
    }

    public RoutePathOverlay(List<GeoPoint> points, int pathColor, boolean drawStartEnd) {
        _points = points;
        _pathColor = pathColor;
        _drawStartEnd = drawStartEnd;
    }

    private void drawOval(Canvas canvas, Paint paint, Point point) {
        Paint ovalPaint = new Paint(paint);
        ovalPaint.setStyle(Paint.Style.FILL_AND_STROKE);
        ovalPaint.setStrokeWidth(2);
        int _radius = 6;
        RectF oval = new RectF(point.x - _radius, point.y - _radius, point.x + _radius, point.y + _radius);
        canvas.drawOval(oval, ovalPaint);
    }

    public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
        Projection projection = mapView.getProjection();
        if (shadow == false && _points != null) {
            Point startPoint = null, endPoint = null;
            Path path = new Path();
            // We are creating the path
            for (int i = 0; i < _points.size(); i++) {
                GeoPoint gPointA = _points.get(i);
                Point pointA = new Point();
                projection.toPixels(gPointA, pointA);
                if (i == 0) {
                    // This is the start point
                    startPoint = pointA;
                    path.moveTo(pointA.x, pointA.y);
                } else {
                    if (i == _points.size() - 1) // This is the end point
                        endPoint = pointA;
                    path.lineTo(pointA.x, pointA.y);
                }
            }
            Paint paint = new Paint();
            paint.setAntiAlias(true);
            paint.setColor(_pathColor);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(3);
            paint.setAlpha(90);
            if (getDrawStartEnd()) {
                if (startPoint != null) {
                    drawOval(canvas, paint, startPoint);
                }
                if (endPoint != null) {
                    drawOval(canvas, paint, endPoint);
                }
            }
            if (!path.isEmpty())
                canvas.drawPath(path, paint);
        }
        return super.draw(canvas, mapView, shadow, when);
    }

    public boolean getDrawStartEnd() {
        return _drawStartEnd;
    }

    public void setDrawStartEnd(boolean markStartEnd) {
        _drawStartEnd = markStartEnd;
    }
}

MyMapActivity

package net.danielkvist;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;

import android.graphics.Color;
import android.os.Bundle;
import android.util.Log;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.MapActivity;
import com.google.android.maps.MapView;

public class MyMapActivity extends MapActivity {

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        MapView mapView = (MapView) findViewById(R.id.mapview);
        mapView.setBuiltInZoomControls(true);
        String url = "http://danielkvist.net/cprg_bef_cbana_polyline_simp1600.kml";
        NavigationDataSet set = MapService.getNavigationDataSet(url);
        drawPath(set, Color.parseColor("#6C8715"), mapView);
    }

    /**
     * Does the actual drawing of the route, based on the geo points provided in
     * the nav set
     *
     * @param navSet
     *            Navigation set bean that holds the route information, incl. geo pos
     * @param color
     *            Color in which to draw the lines
     * @param mMapView01
     *            Map view to draw onto
     */
    public void drawPath(NavigationDataSet navSet, int color, MapView mMapView01) {
        ArrayList<GeoPoint> geoPoints = new ArrayList<GeoPoint>();
        Collection overlaysToAddAgain = new ArrayList();
        for (Iterator iter = mMapView01.getOverlays().iterator(); iter.hasNext();) {
            Object o = iter.next();
            Log.d(BikeApp.APP, "overlay type: " + o.getClass().getName());
            if (!RouteOverlay.class.getName().equals(o.getClass().getName())) {
                overlaysToAddAgain.add(o);
            }
        }
        mMapView01.getOverlays().clear();
        mMapView01.getOverlays().addAll(overlaysToAddAgain);
        int totalNumberOfOverlaysAdded = 0;
        for (Placemark placemark : navSet.getPlacemarks()) {
            String path = placemark.getCoordinates();
            if (path != null && path.trim().length() > 0) {
                String[] pairs = path.trim().split(" ");
                String[] lngLat = pairs[0].split(",");
                // lngLat[0]=longitude
                // lngLat[1]=latitude
                // lngLat[2]=height
                try {
                    if (lngLat.length > 1 && !lngLat[0].equals("") && !lngLat[1].equals("")) {
                        GeoPoint startGP = new GeoPoint(
                                (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                (int) (Double.parseDouble(lngLat[0]) * 1E6));
                        GeoPoint gp1;
                        GeoPoint gp2 = startGP;
                        geoPoints = new ArrayList<GeoPoint>();
                        geoPoints.add(startGP);
                        for (int i = 1; i < pairs.length; i++) {
                            lngLat = pairs[i].split(",");
                            gp1 = gp2;
                            if (lngLat.length >= 2 && gp1.getLatitudeE6() > 0 && gp1.getLongitudeE6() > 0
                                    && gp2.getLatitudeE6() > 0 && gp2.getLongitudeE6() > 0) {
                                // for GeoPoint, first:latitude, second:longitude
                                gp2 = new GeoPoint(
                                        (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                        (int) (Double.parseDouble(lngLat[0]) * 1E6));
                                if (gp2.getLatitudeE6() != 22200000) {
                                    geoPoints.add(gp2);
                                }
                            }
                        }
                        totalNumberOfOverlaysAdded++;
                        mMapView01.getOverlays().add(new RoutePathOverlay(geoPoints));
                    }
                } catch (NumberFormatException e) {
                    Log.e(BikeApp.APP, "Cannot draw route.", e);
                }
            }
        }
        Log.d(BikeApp.APP, "Total overlays: " + totalNumberOfOverlaysAdded);
        mMapView01.setEnabled(true);
    }

    @Override
    protected boolean isRouteDisplayed() {
        // TODO Auto-generated method stub
        return false;
    }
}

Edit: There are of course some more files I'm using but that I have not posted. You can download the complete Eclipse project here: http://danielkvist.net/se.zip
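
    Regarding the first option above (one overlay that draws every path), a rough, untested sketch of what that could look like - the class and its use are my assumption, not code from the question:

      // same imports as RoutePathOverlay above
      public class MultiRouteOverlay extends Overlay {
          private final List<List<GeoPoint>> _routes;   // one inner list per path
          private final int _color;

          public MultiRouteOverlay(List<List<GeoPoint>> routes, int color) {
              _routes = routes;
              _color = color;
          }

          @Override
          public void draw(Canvas canvas, MapView mapView, boolean shadow) {
              if (shadow) return;
              Projection projection = mapView.getProjection();
              Paint paint = new Paint();
              paint.setAntiAlias(true);
              paint.setColor(_color);
              paint.setStyle(Paint.Style.STROKE);
              paint.setStrokeWidth(3);
              Path path = new Path();
              Point p = new Point();
              for (List<GeoPoint> route : _routes) {     // all paths accumulate into one Path
                  for (int i = 0; i < route.size(); i++) {
                      projection.toPixels(route.get(i), p);
                      if (i == 0) path.moveTo(p.x, p.y); else path.lineTo(p.x, p.y);
                  }
              }
              canvas.drawPath(path, paint);              // a single overlay, a single draw call
          }
      }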

    Read the article

  • How does one _model_ data from relational databases in Clojure?

    - by sandeep
    I have asked this question on Twitter as well as in the #clojure IRC channel, yet got no responses. There have been several articles about Clojure-for-Ruby-programmers, Clojure-for-Lisp-programmers... but the missing part is Clojure for ActiveRecord programmers. There have been articles about interacting with MongoDB, Redis, etc. - but these are key-value stores at the end of the day. However, coming from a Rails background, we are used to thinking about databases in terms of inheritance - has_many, polymorphic, belongs_to, etc. The few articles about Clojure/Compojure + MySQL (ffclassic) delve right into SQL. Of course, it might be that an ORM induces an impedance mismatch, but the fact remains that after thinking like ActiveRecord, it is very difficult to think any other way. I believe that relational DBs lend themselves very well to the object-oriented paradigm because they are, essentially, sets. Stuff like ActiveRecord is very well suited to modelling this data. For example, a blog - simply put: class Post < ActiveRecord::Base has_many :comments end class Comment < ActiveRecord::Base belongs_to :post end How does one model this in Clojure, which is so strictly anti-OO? Perhaps the question would have been better if it referred to all functional programming languages, but I am more interested in a Clojure standpoint (and Clojure examples).
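
    A sketch of the shape this often takes in Clojure - plain maps plus one function per relationship - where fetch-comments stands in for whatever SQL layer is actually used (an assumed helper, not code from the question):

      ;; a post is just a map; a "has_many" is nothing more than a function per relationship
      (defn fetch-comments
        "Assumed helper: run the SQL however you like (clojure.java.jdbc, contrib.sql, ...)
         and return the comment rows for one post as a seq of maps."
        [db post-id])

      (defn post-with-comments [db post]
        (assoc post :comments (fetch-comments db (:id post))))

      ;; (post-with-comments db {:id 1 :title "Hello"})
      ;; => {:id 1 :title "Hello" :comments ({:id 7 :post_id 1 :text "Nice!"} ...)}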

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about ObjectiveC's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference and thus ObjectiveC is a somewhat unique situation with having to deal with copying what's on the stack.
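
    The C# loop-variable case hinted at in the last paragraph makes the by-reference capture concrete (a sketch of pre-C# 5 semantics; assumes using System and System.Collections.Generic, inside some method):

      var actions = new List<Action>();
      for (int i = 0; i < 3; i++)
      {
          actions.Add(() => Console.WriteLine(i));   // captures the variable i itself, not a snapshot of its value
      }
      foreach (var a in actions) a();                // prints 3 3 3, not 0 1 2

      // capturing a copy restores the "snapshot" intuition:
      // for (int i = 0; i < 3; i++) { int copy = i; actions.Add(() => Console.WriteLine(copy)); }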

    Read the article

  • How to specialize a type-parameterized argument to multiple different types in Scala?

    - by jmount
    I need a back-check (please). In an article (http://www.win-vector.com/blog/2010/06/automatic-differentiation-with-scala/) I just wrote, I stated my belief that in Scala you cannot specify a function that takes an argument that is itself a function with an unbound type parameter. What I mean is you can write: def g(f:Array[Double]=>Double,Array[Double]):Double but you cannot write something like: def g(f[Y]:Array[Y]=>Double,Array[Double]):Double because Y is not known. The intended use is that inside g() I will specialize f() to multiple different types at different times. You can write: def g[Y](f:Array[Y]=>Double,Array[Double]):Double but then f() is of a single type per call to g() (which is exactly what we do not want). However, you can get all of the equivalent functionality by using a trait extension instead of insisting on passing around a function. What I advocated in my article was: 1) Creating a trait that imitates the structure of Scala's Function1 trait. Something like: abstract trait VectorFN { def apply[Y](x:Array[Y]):Y } 2) Declaring def g(f:VectorFN,Double):Double (using the trait as the type). This works (people here on Stack Overflow helped me find it, and I am happy with it) - but am I misrepresenting Scala by missing an even better solution?
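
    For completeness, a small usage sketch of the trait-based version (the implementation names here are mine, not from the article):

      abstract trait VectorFN { def apply[Y](x: Array[Y]): Y }

      object First extends VectorFN {
        def apply[Y](x: Array[Y]): Y = x(0)   // picks up whatever Y each call site supplies
      }

      def g(f: VectorFN, xs: Array[Double]): Double = f(xs) + f(Array(1.0, 2.0))

      // g(First, Array(3.0, 4.0)) == 3.0 + 1.0 == 4.0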

    Read the article

  • Implicit Type Conversions in Reflection

    - by bradhe
    So I've written a quick bit of code to help quickly convert between business objects and view models. Not to pimp my own blog, but you can find the details here if you're interested or need to know. One issue I've run into is that I have a custom collection type, ProductCollection, and I need to turn that into a string[] on my model. Obviously, since there is no default implicit cast, I'm getting an exception in my contract converter. So, I thought I would write the following bit of code, which should solve the problem: public static implicit operator string[](ProductCollection collection) { var list = new List<string>(); foreach (var product in collection) { if (product.Id == null) { list.Add(null); } else { list.Add(product.Id.ToString()); } } return list.ToArray(); } However, it still fails with the same cast exception. I'm wondering if it has something to do with reflection? If so, is there anything I can do here? I'm open to architectural solutions, too!
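
    One detail worth sketching (my reading of the symptom, not a confirmed fix): user-defined implicit operators are resolved by the compiler, so a reflection-driven cast never sees them - but the generated op_Implicit method can be found and invoked by name:

      // needs using System.Linq; using System.Reflection;
      // op_Implicit is the compiler-generated name of an implicit conversion operator
      var convert = typeof(ProductCollection)
          .GetMethods(BindingFlags.Public | BindingFlags.Static)
          .FirstOrDefault(m => m.Name == "op_Implicit" && m.ReturnType == typeof(string[]));

      if (convert != null)
      {
          var ids = (string[])convert.Invoke(null, new object[] { collection });
      }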

    Read the article

  • XSS attack prevention

    - by Colby77
    Hi, I'm developing a web app where users can respond to blog entries. This is a security problem because they can send dangerous data that will be rendered to other users (and executed as JavaScript). They can't format the text they send. No "bold", no colors, no nothing. Just simple text. I came up with this regex to solve my problem: [^\\w\\s.?!()] So anything that is not a word character (a-z, A-Z, 0-9), not whitespace, ".", "?", "!", "(" or ")" will be replaced with an empty string. Then every quotation mark will be replaced with "&quot;". I check the data on the front end and I check it on my server. Is there any way somebody could bypass this "solution"? I'm also wondering how Stack Overflow does this? There is a lot of formatting here, so they must do a good job with it.
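
    For contrast, a minimal sketch of the encode-on-output approach (escape rather than strip), which is the usual complement to any whitelist; this is my illustration, not the asker's code:

      // escape the characters that matter in an HTML context just before rendering
      function escapeHtml(text) {
        return text
          .replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;")
          .replace(/"/g, "&quot;")
          .replace(/'/g, "&#39;");
      }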

    Read the article

  • Heroku app throws "Internal Server Error"

    - by picardo
    This app works just fine on my local computer. After pushing it to Heroku, static pages appear to be working but the blog section throws an Internal Server Error. I pulled the logs by running "heroku logs" and this is what I get: ==> production.log <== /usr/ruby1.8.7/lib/ruby/gems/1.8/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in `start' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/server.rb:156:in `start' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in `start' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:177:in `send' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:177:in `run_command' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:143:in `run!' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/bin/thin:6 Something wrong with the eventmachine gem, I suppose....but it works fine on my machine. So I'm not sure what's going on or how to debug it.

    Read the article

  • How to add onkeydown and onkeyup events in JavaScript

    - by Ramesh
    Hello, I am creating a live search for my blog. I got this from W3Schools, and I need to add keyboard up/down and mouse up/down handling... <html> <head> <script type="text/javascript"> function showResult(str) { if (str.length==0) { document.getElementById("livesearch").innerHTML=""; document.getElementById("livesearch").style.border="0px"; return; } if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp=new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { document.getElementById("livesearch").innerHTML=xmlhttp.responseText; document.getElementById("livesearch").style.border="1px solid #A5ACB2"; } } xmlhttp.open("GET","livesearch.php?q="+str,true); xmlhttp.send(); } </script> </head> <body> <form> <input type="text" size="30" onkeyup="showResult(this.value)" /> <div id="livesearch"></div> </form> </body> </html>
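
    A sketch of the usual arrow-key pattern on the input (key codes 38/40 for up/down); highlightResult is a hypothetical helper that would mark the chosen row inside the livesearch div:

      var selectedIndex = -1;

      function onSearchKeyDown(e) {
        var results = document.getElementById("livesearch").getElementsByTagName("a");
        if (e.keyCode == 40) {          // arrow down
          selectedIndex = Math.min(selectedIndex + 1, results.length - 1);
        } else if (e.keyCode == 38) {   // arrow up
          selectedIndex = Math.max(selectedIndex - 1, 0);
        } else {
          return;
        }
        // highlightResult(results, selectedIndex);   // hypothetical: toggle a CSS class on the selected row
      }

      // <input type="text" size="30" onkeyup="showResult(this.value)" onkeydown="onSearchKeyDown(event)" />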

    Read the article

  • Validate HAML from ActiveRecord: scope/controller/helpers for link_to etc?

    - by Chris Boyle
    I like HAML. So much, in fact, that in my first Rails app, which is the usual blog/CMS thing, I want to render the body of my Page model using HAML. So here is app/views/pages/_body.html.haml: .entry-content= Haml::Engine.new(body, :format => :html5).render ...and it works (yay, recursion). What I'd like to do is validate the HAML in the body when creating or updating a Page. I can almost do that, but I'm stuck on the scope argument to render. I have this in app/models/page.rb: validates_each :body do |record, attr, value| begin Haml::Engine.new(value, :format => :html5).render(record) rescue Exception => e record.errors.add attr, "line #{(e.respond_to? :line) && e.line || 'unknown'}: #{e.message}" end end You can see I'm passing record, which is a Page, but even that doesn't have a controller, and in particular doesn't have any helpers like link_to, so as soon as a Page uses any of that it's going to fail to validate even when it would actually render just fine. So I guess I need a controller as scope for this, but accessing that from here in the model (where the validator is) is a big MVC no-no, and as such I don't think Rails gives me a way to do it. (I mean, I suppose I could stash a controller in some singleton somewhere or something, but... excuse me while I throw up.) What's the least ugly way to properly validate HAML in an ActiveRecord validator?

    Read the article

  • Flex 3 - Image cache

    - by BS_C3
    Hello Community. I'm doing an image cache following this method: http://www.brandondement.com/blog/2009/08/18/creating-an-image-cache-with-actionscript-3/ I copied the two AS classes, renaming them CachedImage and CachedImageMap. The thing is that I don't want to store an image only after it has been loaded a first time, but rather while the application is being loaded. For that, I've created a function that is called by the application's pre-initialize event. This is how it looks: private function loadImages():void { var im:CachedImage = new CachedImage; var sources:ArrayCollection = new ArrayCollection; for each(var cs in divisionData.division.collections.collection.collectionSelection) { sources.addItem(cs.toString()); } for each(var se in divisionData.division.collections.collection.searchEngine) { sources.addItem(se.toString()); } for each( var source:String in sources) { im.source = source; im.load(source); } } The sources are properly retrieved. However, even though I call the load method, I never get the "complete" event... as if the image is not being loaded. Why is that? Any help would be appreciated. Thanks in advance. Regards, BS_C3

    Read the article

  • wordpress generating slow mysql queries - is it index problem?

    - by tash
    Hello Stack Overflow, I've got very slow MySQL queries coming from my WordPress site. They're making everything slow and I think they are eating up CPU. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result - although very occasionally the queries do seem to be performed at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, and not from anything specific - like the theme - for my site. I have a vague feeling that the database is not always using the indexes, or is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated. Query: [wp-blog-header.php(14): wp()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort Query time: 34.2829 (ms) 9) Query: [wp-content/themes/LMHR/index.php(40): query_posts()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort 2 DEPENDENT SUBQUERY tr ref PRIMARY,term_taxonomy_id PRIMARY 8 func 1 Using index 2 DEPENDENT SUBQUERY tt eq_ref PRIMARY,term_id_taxonomy,taxonomy PRIMARY 8 antin1_lovemusic2010.tr.term_taxonomy_id 1 Using where Query time: 70.3900 (ms)

    Read the article

  • How to check the null value of an integer type field in an ASP.NET MVC view?

    - by Vikas
    Hi, I have an integer field in the database with the "Not Null" property. When I create a view and do validation, if I leave that field blank it is treated as 0, so I cannot compare it with 0 - because if someone actually inserts the value 0 it would be flagged as an error! Another problem is that I am using model errors as described in the book "ASP.NET MVC 1.0" and on Scott Gu's blog, and I am checking the value in the partial class of the object (created by LINQ to SQL), i.e. public partial class Person { public bool IsValid { get { return (GetRuleViolations().Count() == 0); } } public IEnumerable<RuleViolation> GetRuleViolations() { if (String.IsNullOrEmpty(Name)) yield return new RuleViolation("Name is Required", "Name"); if (Age == 0) yield return new RuleViolation("Age is Required", "Age"); yield break; } partial void OnValidate(ChangeAction action) { if (!IsValid) throw new ApplicationException("Rule violations prevent saving"); } } There is also a problem with range: if the column is declared as smallint (i.e. short in C#) and I exceed that range, it gives the error "A value is required". So, finally, is there a best way to do validation in ASP.NET MVC?
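
    One common way around the 0-vs-blank ambiguity (a sketch, assuming a separate input/view model is acceptable) is to bind to a nullable int, so a blank field posts as null instead of 0:

      public class PersonInput
      {
          public string Name { get; set; }
          public int? Age { get; set; }   // a blank form field binds to null, "0" binds to 0
      }

      // validation can then tell the two cases apart:
      // if (!input.Age.HasValue)        -> "Age is required"
      // else if (input.Age.Value == 0)  -> a legitimate zero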

    Read the article

  • Wordpress inserting comments via wp_insert_comment()

    - by Cyber Junkie
    Hello all happy holidays! :) I'm trying to insert comments in my wordpress blog via the wp_insert_comment() function. It's for a plugin I'm trying to make. I have this code in my header for testing. It works every time I refresh the page. $agent = $_SERVER['HTTP_USER_AGENT']; $data = array( 'comment_post_ID' => 256, 'comment_author' => 'Dave', 'comment_author_email' => '[email protected]', 'comment_author_url' => 'http://www.someiste.com', 'comment_content' => 'Lorem ipsum dolor sit amet...', 'comment_author_IP' => '127.3.1.1', 'comment_agent' => $agent, 'comment_date' => date('Y-m-d H:i:s'), 'comment_date_gmt' => date('Y-m-d H:i:s'), 'comment_approved' => 1, ); $comment_id = wp_insert_comment($data); It successfully inserts comments into the database. The problem: Comments don't show via the Disqus comment system. I compared table rows and I noticed that user_agent differs. Normal comments use for example, Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv... and Disqus comments use Disqus/1.1(2.61):119598902 numbers are different for each comment. Does anyone know how to insert comments with wp_insert_comment() when Disqus is enabled?

    Read the article

  • Redirecting the root domain - SEO and other issues, need some guidance!

    - by Jim Sp
    I'm not familiar with some of these forwarding methods and I need help. My issue is this: I have a site hosted on discountasp.net. My domain was registered through 1&1 and I redirected the DNS to what discountasp.net wanted. So when a user types www.mydomain.com, he/she sees the ASP.NET site hosted on discountasp.net, which is all fine. My main page is Index.aspx. I really suck at HTML page design and I don't have the time or the talent to fiddle with it (or money to get it done by a pro); the rest of the pages are fine. I want to use a good theme from Tumblr or Blogger - one of the blog sites - and create a page that I want to use as the first page, directly on Blogger or Tumblr, say yyy.blogspot.com (I have many reasons, so for now please don't bash my decision - let's just say that's what I want). That means when a user types www.mydomain.com, it should redirect to the Blogger or Tumblr page. Everything else stays the same - the links on the Blogger page will say www.mydomain.com/xxxx and show what's on the hosted website; I have set up the IIS rewrite rules etc. so that all works just fine. The bottom line is I want to show an external site's web page as my root page. I suppose I'm struggling to even explain what I want! I can of course do a Response.Redirect on the Index.aspx page, which is the simplest way to manage this, but the big question is: will this hurt SEO in some way? If not, that is what I will do and leave the rest of the infrastructure intact (I have already done this to test and it works fine). Thank you very much, j
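
    If the redirect route is taken, a 301 (permanent) response is generally the more SEO-friendly choice than the 302 that a plain Response.Redirect sends; a minimal sketch for Index.aspx.cs, with the target URL as a placeholder:

      protected void Page_Load(object sender, EventArgs e)
      {
          // permanently point the root page at the externally hosted front page
          Response.StatusCode = 301;
          Response.AddHeader("Location", "http://yyy.blogspot.com/");
          Response.End();
      }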

    Read the article

  • Generating URLs when not using an integer as an id?

    - by Synthesezia
    So I'm building a blog engine which has /articles/then-the-article-permalink as its URL structure. I need prev and next links which jump to the previous or next article by pub_date; my code looks like this: In my articles#show @article = Article.find_by_permalink(params[:id]) @prev_article = Article.find(:first, :conditions => [ "pub_date < ?", @article.pub_date]) @next_picture = Article.find(:first, :conditions => [ "pub_date > ?", @article.pub_date]) And in my show.html.erb <%= link_to "Next", article_path(@next_article) %> <%= link_to 'Prev', article_path(@prev_article) %> In my articles model I have this: def to_param self.permalink end The specific error message I get is: article_url failed to generate from {:action=>"show", :controller=>"articles", :id=>nil}, expected: {:action=>"show", :controller=>"articles"}, diff: {:id=>nil} Without the prev and next everything is working fine, but I'm out of ideas as to why this isn't working. Anyone want to help?

    Read the article
