Search Results

Search found 3053 results on 123 pages for 'resolution'.

Page 115 of 123

  • Why does filter: blur(0) still cause text to blur under Webkit?

    - by johnkavanagh
    I've come across a bug today that's taken far longer than I would like to admit to identify. Essentially: setting a filter: blur(0) (or the vendor-specific -webkit-filter) on an element should - I believe - mean that no form of blur is applied. However, having tested this today, it would appear that Webkit-based browsers still blur the text within any element with either blur(0) or blur(0px) assigned to it. I've knocked together a quick Fiddle here: http://jsfiddle.net/f9rBE/ These are three identical divs containing text (no custom fonts): This has absolutely nothing assigned Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam facilisis orci in quam venenatis, in tempus ipsum sagittis. Suspendisse potenti. Donec ullamcorper lacus vel odio accumsan, vel aliquam libero tempor. Praesent nec libero venenatis, ultrices arcu non, luctus quam. Morbi scelerisque sit amet turpis sit amet tincidunt. Praesent semper erat non purus pretium consequat. Aenean et iaculis turpis. Curabitur diam tellus, consectetur non massa et, commodo venenatis metus. One has no styles at all assigned, the other two have blur(0) and blur(0px): .no-blur{} .zero-px-blur{ -webkit-filter: blur(0px); -moz-filter: blur(0px); -o-filter: blur(0px); -ms-filter: blur(0px); filter: blur(0px); } .zero-blur{ -webkit-filter: blur(0); -moz-filter: blur(0); -o-filter: blur(0); -ms-filter: blur(0); filter: blur(0); } If you preview this under Chrome/Safari you'll see that the text in the last two is still blurred. A few things worth noting: this unintentional blurring occurs in Safari on iOS7 devices (both iPhones and iPads); it also occurs in Chrome and Safari under OSX; it doesn't happen under Firefox in OSX. Of course, this isn't supported at all in Firefox just yet, so it's hard to tell whether the behaviour I'm seeing is intentional/expected behaviour, or whether this is a bug in Webkit. Is it possible that this is only prevalent in higher-density resolution devices (ie: retina MacBook/iPhone/iPad)? With this in mind, how do you actually overwrite an item that has blur applied to it to set it back to non-blurred?

  • IoC/DI in the face of winforms and other generated code

    - by Kaleb Pederson
    When using dependency injection (DI) and inversion of control (IoC) objects will typically have a constructor that accepts the set of dependencies required for the object to function properly. For example, if I have a form that requires a service to populate a combo box you might see something like this: // my files public interface IDataService { IList<MyData> GetData(); } public interface IComboDataService { IList<MyComboData> GetComboData(); } public partial class PopulatedForm : BaseForm { private IDataService service; public PopulatedForm(IDataService service) { //... InitializeComponent(); } } This works fine at the top level, I just use my IoC container to resolve the dependencies: var form = ioc.Resolve<PopulatedForm>(); But in the face of generated code, this gets harder. In winforms a second file composing the rest of the partial class is generated. This file references other components, such as custom controls, and uses no-args constructors to create such controls: // generated file: PopulatedForm.Designer.cs public partial class PopulatedForm { private void InitializeComponent() { this.customComboBox = new UserCreatedComboBox(); // customComboBox has an IComboDataService dependency } } Since this is generated code, I can't pass in the dependencies and there's no easy way to have my IoC container automatically inject all the dependencies. One solution is to pass in the dependencies of each child component to PopulatedForm even though it may not need them directly, such as with the IComboDataService required by the UserCreatedComboBox. I then have the responsibility to make sure that the dependencies are provided through various properties or setter methods. Then, my PopulatedForm constructor might look as follows: public PopulatedForm(IDataService service, IComboDataService comboDataService) { this.service = service; InitializeComponent(); this.customComboBox.ComboDataService = comboDataService; } Another possible solution is to have the no-args constructor to do the necessary resolution: public class UserCreatedComboBox { private IComboDataService comboDataService; public UserCreatedComboBox() { if (!DesignMode && IoC.Instance != null) { comboDataService = Ioc.Instance.Resolve<IComboDataService>(); } } } Neither solution is particularly good. What patterns and alternatives are available to more capably handle dependency-injection in the face of generated code? I'd love to see both general solutions, such as patterns, and ones specific to C#, Winforms, and Autofac.
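
    One container-agnostic way to tackle this (a sketch, not taken from the question) is to keep constructor injection at the form level and push setter injection down the designer-built control tree once InitializeComponent() has returned. The marker interface INeedComboDataService and the InjectInto helper below are made-up names used only for illustration:

      using System.Windows.Forms;

      public interface IComboDataService { /* as defined in the question */ }

      // Designer-created controls advertise their dependencies through a marker interface.
      public interface INeedComboDataService
      {
          IComboDataService ComboDataService { set; }
      }

      public class PopulatedForm : Form
      {
          public PopulatedForm(IComboDataService comboDataService)
          {
              InitializeComponent();               // designer code builds the control tree
              InjectInto(this, comboDataService);  // then satisfy setter dependencies in one pass
          }

          private static void InjectInto(Control root, IComboDataService service)
          {
              foreach (Control child in root.Controls)
              {
                  var consumer = child as INeedComboDataService;
                  if (consumer != null)
                      consumer.ComboDataService = service;
                  InjectInto(child, service);      // recurse into nested panels and containers
              }
          }

          private void InitializeComponent() { /* generated by the designer in a real project */ }
      }

    Container-specific features such as Autofac's PropertiesAutowired registration option can achieve much the same effect without a hand-rolled marker interface, at the cost of making the property injection implicit.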

  • C# XNA keydown with a delay of 1 sec

    - by bld
    Hi! I'm writing a Tetris clone in XNA. I have a class with a method RotateBlocks. When I press the "Up" arrow key and hold the button down for 1 sec or more, I want it to execute the statements in the first else if (rotating the blocks fast); right now nothing is happening. I have declared oldState globally in the class. If I remove the gameTime check in the first else if, the block rotates fast immediately. If I try to step through the code with breakpoints, the resolution gets messed up. public void RotateBlocks(loadBlock lb, KeyboardState newState, GameTime gameTime) { _elapsedSeconds2 += (float)gameTime.ElapsedGameTime.TotalSeconds; if (lb._name.Equals("block1")) { if (newState.IsKeyDown(Keys.Up) && !oldState.IsKeyDown(Keys.Up)) { // the player just pressed Up if (_rotated) { lb._position[0].X -= 16; lb._position[0].Y -= 16; lb._position[2].X += 16; lb._position[2].Y += 16; lb._position[3].X += 32; lb._position[3].Y += 32; _rotated = false; } else if (!_rotated) { lb._position[0].X += 16; lb._position[0].Y += 16; lb._position[2].X -= 16; lb._position[2].Y -= 16; lb._position[3].X -= 32; lb._position[3].Y -= 32; _rotated = true; } } if (newState.IsKeyDown(Keys.Up) && oldState.IsKeyDown(Keys.Up)) { // the player is holding the key down if (gameTime.ElapsedGameTime.TotalSeconds >=1) { if (_rotated) { lb._position[0].X -= 16; lb._position[0].Y -= 16; lb._position[2].X += 16; lb._position[2].Y += 16; lb._position[3].X += 32; lb._position[3].Y += 32; _rotated = false; } else if (!_rotated) { lb._position[0].X += 16; lb._position[0].Y += 16; lb._position[2].X -= 16; lb._position[2].Y -= 16; lb._position[3].X -= 32; lb._position[3].Y -= 32; _rotated = true; } _elapsedSeconds2 = 0; } }
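
    The held-key branch above compares gameTime.ElapsedGameTime.TotalSeconds against 1, but ElapsedGameTime is only the time since the previous frame (a few milliseconds), so that test will essentially never pass. A minimal sketch of the usual fix, accumulating the hold time in the class-level timer instead (Rotate() here stands in for the position-shuffling code from the question, and the field names are assumptions):

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Input;

      public class BlockRotator
      {
          private KeyboardState oldState;
          private float heldSeconds;            // plays the role of _elapsedSeconds2

          public void RotateBlocks(KeyboardState newState, GameTime gameTime)
          {
              if (newState.IsKeyDown(Keys.Up) && !oldState.IsKeyDown(Keys.Up))
              {
                  Rotate();                     // fresh press: rotate once right away
                  heldSeconds = 0f;             // start timing the hold
              }
              else if (newState.IsKeyDown(Keys.Up) && oldState.IsKeyDown(Keys.Up))
              {
                  heldSeconds += (float)gameTime.ElapsedGameTime.TotalSeconds;
                  if (heldSeconds >= 1f)        // held for a full second
                  {
                      Rotate();                 // fast repeat kicks in
                      heldSeconds = 0f;
                  }
              }

              oldState = newState;              // remember this frame's state for the next one
          }

          private void Rotate() { /* move the lb._position entries as in the question */ }
      }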

  • jQuery anchor to HTML extract

    - by Benjamin Ortuzar
    I would like to implement something similar to the Google quick scroll extension with jQuery for the extracts of a search result, so when the full document is opened (within the same website) it gives the user the opportunity to go straight to the extract location. Here is a sample of what I get returned from the search engine when I search for 'food'. <doc> <docid>129305</docid> <title><span class='highlighted'>Food</span></title> <summary> <summarytext>Papers subject to Negative Resolution: 4 <span class='highlighted'>Food</span> <span class='highlighted'>Food</span> Irradiation (England) Regulations 2009 (S.I., 2009, No. 1584), dated 24 June 2009 (by Act), </summarytext> </summary> <paras> <paraitemcount>2</paraitemcount> <para> <paraitem>1</paraitem> <paraid>42</paraid> <pararelevance>100</pararelevance> <paraweights>50</paraweights> <paratext>4 <span class='highlighted'>Food</span></paratext> </para> <para> <paraitem>2</paraitem> <paraid>54</paraid> <pararelevance>100</pararelevance> <paraweights>50</paraweights> <paratext><span class='highlighted'>Food</span> Irradiation (England) Regulations 2009 (S.I., 2009, No. 1584), dated 24 June 2009 (by Act), with an Explanatory Memorandum and an Impact Assessment (</paratext> </para> </paras> </doc> As you see, the search engine has returned a document that contains one summary and two extracts. So let's say the user clicks on the second extract in the search results page; the browser would open the detailed document in the same website, and would offer the user the possibility to go to the extract as the Google quick scroll extension does. Is there an existing jQuery script for this? If not, can you suggest any jQuery/JavaScript code that would simplify my task to implement this? Notes: I can access the extracts from the document details page. I'm aware that the HTML in some cases could be slightly different in the extract than in the details page, finding no match. The search engine does not return where the extract was located. At the moment I'm trying to understand the JS code that the extension uses.

  • SQLiteException and SQLite error near "(": syntax error with Subsonic ActiveRecord

    - by nvuono
    I ran into an interesting error with the following LiNQ query using LiNQPad and when using Subsonic 3.0.x w/ActiveRecord within my project and wanted to share the error and resolution for anyone else who runs into it. The linq statement below is meant to group entries in the tblSystemsValues collection into their appropriate system and then extract the system with the highest ID. from ksf in KeySafetyFunction where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystems on ksf.ID equals sys.KeySafetyFunction join xval in (from t in tblSystemsValues group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g=>g.ID), MaxText = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).Checked }) on sys.ID equals xval.sysId select new {KSFDesc=ksf.Description, sys.Description, xval.MaxText, xval.MaxChecked} On its own, the subquery for grouping into groupedT works perfectly and the query to match up KeySafetyFunctions with their System in tblSystems also works perfectly on its own. However, when trying to run the completed query in linqpad or within my project I kept running into a SQLiteException SQLite Error Near "(" First I tried splitting the queries up within my project because I knew that I could just run a foreach loop over the results if necessary. However, I continued to receive the same exception! I eventually separated the query into three separate parts before I realized that it was the lazy execution of the queries that was killing me. It then became clear that adding the .ToList() specifier after the myProtectedSystem query below was the key to avoiding the lazy execution after combining and optimizing the query and being able to get my results despite the problems I encountered with the SQLite driver. // determine the max Text/Checked values for each system in tblSystemsValue var myProtectedValue = from t in tblSystemsValue.All() group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g => g.ID), MaxText = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).Checked}; // get the system description information and filter by Unit/Condition ID var myProtectedSystem = (from ksf in KeySafetyFunction.All() where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystem.All() on ksf.ID equals sys.KeySafetyFunction select new {KSFDesc = ksf.Description, sys.Description, sys.ID}).ToList(); // finally join everything together AFTER forcing execution with .ToList() var joined = from protectedSys in myProtectedSystem join protectedVal in myProtectedValue on protectedSys.ID equals protectedVal.sysId select new {protectedSys.KSFDesc, protectedSys.Description, protectedVal.MaxChecked, protectedVal.MaxText}; // print the gratifying debug results foreach(var protectedItem in joined) { System.Diagnostics.Debug.WriteLine(protectedItem.Description + ", " + protectedItem.KSFDesc + ", " + protectedItem.MaxText + ", " + protectedItem.MaxChecked); }

  • How to convert a 32bpp image to an indexed format?

    - by Ed Swangren
    So here are the details (I am using C# BTW): I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format. I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an indexed format by these methods: // causes a "Parameter not valid" error Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed) // no error, but the resulting image is black due to information loss I assume Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed) I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels that have a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with, please let me know. EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish. We are scanning a tissue slide at high resolution (pathology application). I write the interface to the actual scanner. We use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image. The image is displayed next to a track bar. When the user moves the track bar (adjusting line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I do this using an ImageAttributes and ColorMatrix object currently. When the user adjusts the track bar, I adjust the matrix. This does not give me per pixel information, but the performance is very nice. I could use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like for all pixels with a brightness value of 240 to be colored red. I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.
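
    PixelFormat.Indexed on its own is a flag rather than a creatable format, which is why the first constructor call above throws "Parameter is not valid"; Format8bppIndexed plus a custom palette is the usual combination. A small sketch of just the palette setup, assuming an 8bpp indexed bitmap whose pixel values are brightness levels (the helper name and the threshold parameter are mine, and filling the indexed pixels from the 32bpp source is still a separate step, e.g. via LockBits):

      using System.Drawing;
      using System.Drawing.Imaging;

      static class IndexedPreview
      {
          public static Bitmap MakeIndexedCanvas(int width, int height, int threshold = 240)
          {
              var indexed = new Bitmap(width, height, PixelFormat.Format8bppIndexed);

              // The Palette property hands out a copy; modify it and assign it back.
              ColorPalette palette = indexed.Palette;
              for (int i = 0; i < 256; i++)
                  palette.Entries[i] = i >= threshold
                      ? Color.Red                   // over-saturated levels show up red
                      : Color.FromArgb(i, i, i);    // everything else stays a grayscale ramp
              indexed.Palette = palette;

              return indexed;
          }
      }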

  • How to stick a QDialog to screen borders like Skype does?

    - by mosg
    Hello. A long time ago I tried to find a method to make a QDialog window stick to the screen borders for my small projects, the way Skype's windows do, but I failed. Maybe I was looking in the wrong place, so now I'm looking for the solution here, on Stack! :) So, has anyone dealt with this kind of code, or have any links or samples? In my opinion, we have to reimplement the QDialog moveEvent function, like below, but that code is not working: void CDialog::moveEvent(QMoveEvent * event) { QRect wndRect; int leftTaskbar = 0, rightTaskbar = 0, topTaskbar = 0, bottomTaskbar = 0; // int top = 0, left = 0, right = 0, bottom = 0; wndRect = this->frameGeometry(); // Screen resolution int screenWidth = QApplication::desktop()->width(); int screenHeight = QApplication::desktop()->height(); int wndWidth = wndRect.right() - wndRect.left(); int wndHeight = wndRect.bottom() - wndRect.top(); int posX = event->pos().x(); int posY = event->pos().y(); // Snap to screen border // Left border if (posX >= -m_nXOffset + leftTaskbar && posX <= leftTaskbar + m_nXOffset) { //left = leftTaskbar; this->move(leftTaskbar, posY); return; } // Top border if (posY >= -m_nYOffset && posY <= topTaskbar + m_nYOffset) { //top = topTaskbar; this->move(posX, topTaskbar); return; } // Right border if (posX + wndWidth <= screenWidth - rightTaskbar + m_nXOffset && posX + wndWidth >= screenWidth - rightTaskbar - m_nXOffset) { //right = screenWidth - rightTaskbar - wndWidth; this->move(screenWidth - rightTaskbar - wndWidth, posY); return; } // Bottom border if (posY + wndHeight <= screenHeight - bottomTaskbar + m_nYOffset && posY + wndHeight >= screenHeight - bottomTaskbar - m_nYOffset) { //bottom = screenHeight - bottomTaskbar - wndHeight; this->move(posX, screenHeight - bottomTaskbar - wndHeight); return; } QDialog::moveEvent(event); } Thanks.

  • Installing Enclosure with Netbeans

    - by jdknewb
    Hi all, I am having trouble installing Enclosure and getting it to work. I have followed this guide http://www.enclojure.org/gettingstarted and successfully installed Enclosure (I think). However, when I try to build the sample application (labrepl) I get a bunch of errors and a failed build. I haven't used Java in a long time and I've never used Netbeans, and the error doesn't seem very helpful with my limited knowledge of this domain. I'm using the latest Netbeans and the Enclosure URL from the guide. Since I am on Windows, I can't use git to clone the repo, so I'm not sure what to do from here. Anyway, here are the error messages. WARNING: You are running embedded Maven builds, some build may fail due to incompatibilities with latest Maven release. To set Maven instance to use for building, click here. Scanning for projects... [#process-resources] [resources:resources] Using default encoding to copy filtered resources. [#compile] [ERROR]Transitive dependency resolution for scope: compile has failed for your project. [ERROR]Error message: Missing: [ERROR]---------- [ERROR]1) org.clojure:clojure-contrib:jar:1.2.0-master-SNAPSHOT [ERROR] Try downloading the file manually from the project website. [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=org.clojure -DartifactId=clojure-contrib -Dversion=1.2.0-master-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=org.clojure -DartifactId=clojure-contrib -Dversion=1.2.0-master-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] Path to dependency: [ERROR] 1) labrepl:labrepl:jar:0.0.1 [ERROR] 2) org.clojure:clojure-contrib:jar:1.2.0-master-SNAPSHOT [ERROR]---------- [ERROR]1 required artifact is missing. [ERROR]for artifact: [ERROR] labrepl:labrepl:jar:0.0.1 [ERROR]from the specified remote repositories: [ERROR] central (http://repo1.maven.org/maven2), [ERROR] clojars (http://clojars.org/repo/), [ERROR] incanter (http://repo.incanter.org), [ERROR] clojure-snapshots (http://build.clojure.org/snapshots), [ERROR] clojure (http://build.clojure.org/releases), [ERROR] clojure-releases (http://build.clojure.org/releases) [ERROR]Group-Id: labrepl [ERROR]Artifact-Id: labrepl [ERROR]Version: 0.0.1 [ERROR]From file: C:\Users\chloey\Documents\NetBeansProjects\RelevanceLabRepl\pom.xml ------------------------------------------------------------------------ For more information, run with the -e flag ------------------------------------------------------------------------ BUILD FAILED ------------------------------------------------------------------------ Total time: 1 second Finished at: Wed Jun 09 21:53:04 CDT 2010 Final Memory: 72M/172M ------------------------------------------------------------------------ Thanks all.

  • derby + hibernate ConstraintViolationException using manytomany relationships

    - by user364470
    Hi, I'm new to Hibernate+Derby... I've seen this issue mentioned throughout the google, but have not seen a proper resolution. This following code works fine with mysql, but when I try this on derby i get exceptions: ( each Tag has two sets of files and vise-versa - manytomany) Tags.java @Entity @Table(name="TAGS") public class Tags implements Serializable { @Id @GeneratedValue(strategy=GenerationType.AUTO) public long getId() { return id; } @ManyToMany(targetEntity=Files.class ) @ForeignKey(name="USER_TAGS_FILES",inverseName="USER_FILES_TAGS") @JoinTable(name="USERTAGS_FILES", joinColumns=@JoinColumn(name="TAGS_ID"), inverseJoinColumns=@JoinColumn(name="FILES_ID")) public Set<data.Files> getUserFiles() { return userFiles; } @ManyToMany(mappedBy="autoTags", targetEntity=data.Files.class) public Set<data.Files> getAutoFiles() { return autoFiles; } Files.java @Entity @Table(name="FILES") public class Files implements Serializable { @Id @GeneratedValue(strategy=GenerationType.AUTO) public long getId() { return id; } @ManyToMany(mappedBy="userFiles", targetEntity=data.Tags.class) public Set getUserTags() { return userTags; } @ManyToMany(targetEntity=Tags.class ) @ForeignKey(name="AUTO_FILES_TAGS",inverseName="AUTO_TAGS_FILES") @JoinTable(name="AUTOTAGS_FILES", joinColumns=@JoinColumn(name="FILES_ID"), inverseJoinColumns=@JoinColumn(name="TAGS_ID")) public Set getAutoTags() { return autoTags; } I add some data to the DB, but when running over Derby these exception turn up (the don't using mysql) Exceptions SEVERE: DELETE on table 'FILES' caused a violation of foreign key constraint 'USER_FILES_TAGS' for key (3). The statement has been rolled back. Jun 10, 2010 9:49:52 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions SEVERE: Could not synchronize database state with session org.hibernate.exception.ConstraintViolationException: could not delete: [data.Files#3] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2712) at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2895) at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:613) at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344) at $Proxy13.flush(Unknown Source) at data.HibernateORM.removeFile(HibernateORM.java:285) at data.DataImp.removeFile(DataImp.java:195) at booting.DemoBootForTestUntilTestClassesExist.main(DemoBootForTestUntilTestClassesExist.java:62) I have never used derby before so maybe there is something crutal 
that I'm missing. 1) What am I doing wrong? 2) Is there any way of cascading properly when I have two many-to-many relationships between the same two classes? Thanks!

  • RenderTargetBitmap + Resource'd VisualBrush = incomplete image

    - by Will
    I've found a new twist on the "Visual to RenderTargetBitmap" question! I'm rendering previews of WPF stuff for a designer. That means I need to take a WPF visual and render it to a bitmap without that visual ever being displayed. Got a nice little method to do it; like to see it? Here it goes: private static BitmapSource CreateBitmapSource(FrameworkElement visual) { Border b = new Border { Width = visual.Width, Height = visual.Height }; b.BorderBrush = Brushes.Black; b.BorderThickness = new Thickness(1); b.Background = Brushes.White; b.Child = visual; b.Measure(new Size(b.Width, b.Height)); b.Arrange(new Rect(b.DesiredSize)); RenderTargetBitmap rtb = new RenderTargetBitmap( (int)b.ActualWidth, (int)b.ActualHeight, 96, 96, PixelFormats.Pbgra32); // intermediate step here to ensure any VisualBrushes are rendered properly DrawingVisual dv = new DrawingVisual(); using (var dc = dv.RenderOpen()) { var vb = new VisualBrush(b); dc.DrawRectangle(vb, null, new Rect(new Point(), b.DesiredSize)); } rtb.Render(dv); return rtb; } Works fine, except for one leeetle thing... if my FrameworkElement has a VisualBrush, that brush doesn't end up in the final rendered bitmap. Something like this: <UserControl.Resources> <VisualBrush x:Key="LOLgo"> <VisualBrush.Visual> <!-- blah blah --> <Grid Background="{StaticResource LOLgo}"> <!-- yadda yadda --> Everything else renders to the bitmap, but that VisualBrush just won't show. The obvious Google solutions have been attempted and have failed. Even the ones that specifically mention VisualBrushes missing from RTB'd bitmaps. I have a sneaky suspicion this might be caused by the fact that it's a Resource, and that lazy resource isn't being inlined. So a possible fix would be to, somehow(???), force resolution of all static resource references before rendering. But I have absolutely no idea how to do that. Anybody have a fix for this?
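
    One workaround that gets suggested for this kind of off-screen rendering (a sketch of a commonly tried approach, not a guaranteed fix for the resource case) is to run a full layout pass and flush pending dispatcher work at render priority before calling Render(), so that deferred work such as resource and template realization has had a chance to happen while the element is still off-screen:

      using System;
      using System.Windows;
      using System.Windows.Media;
      using System.Windows.Media.Imaging;
      using System.Windows.Threading;

      static class OffscreenRenderer
      {
          public static BitmapSource Render(FrameworkElement visual, double width, double height)
          {
              visual.Measure(new Size(width, height));
              visual.Arrange(new Rect(0, 0, width, height));
              visual.UpdateLayout();                             // complete the layout pass

              // Let queued Render-priority work (template/resource realization) run first.
              visual.Dispatcher.Invoke(DispatcherPriority.Render, new Action(() => { }));

              var rtb = new RenderTargetBitmap((int)width, (int)height, 96, 96, PixelFormats.Pbgra32);
              rtb.Render(visual);
              return rtb;
          }
      }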

  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (eg the + or o or a square etc). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the datasets converge (> 1e3) it's really difficult to see what data is what. There are over 1000 data points per decade, which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution. What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that as my data is linear I could just use a linear index to prune, which led to something like this (but for all decades): //%grab indicies per decade ind12 = find(y >= 1e1 & y <= 1e2); indlow = find(y < 1e2); indhigh = find(y > 1e4); ind23 = find(y >+ 1e2 & y <= 1e3); ind34 = find(y >+ 1e3 & y <= 1e4); //%We want ind12 indexes in this decade, find spacing tot23 = round(length(ind23)/length(ind12)); tot34 = round(length(ind34)/length(ind12)); //%grab ones to keep ind23keep = ind23(1):tot23:ind23(end); ind34keep = ind34(1):tot34:ind34(end); indnew = [indlow' ind23keep ind34keep indhigh']; loglog(x(indnew), y(indnew)); But this causes the prune to behave in a jumpy fashion, obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale. Any ideas on how I can do this?

  • union marshalling issue in C#

    - by senthil
    I have union inside structure and the structure looks like struct tDeviceProperty { DWORD Tag; DWORD Size; union _DP value; }; typedef union _DP { short int i; LONG l; ULONG ul; float flt; double dbl; BOOL b; double at; FILETIME ft; LPSTR lpszA; LPWSTR lpszW; LARGE_INTEGER li; struct tBinary bin; BYTE reserved[40]; } __UDP; struct tBinary { ULONG size; BYTE * bin; }; from the tBinary structure bin has to be converted to tImage (structure is given below) struct tImage { DWORD x; DWORD y; DWORD z; DWORD Resolution; DWORD type; DWORD ID; diccid_t SourceID; const void *buffer; const char *Info; const char *UserImageID; }; to use the same in c# I have done marshaling but not giving proper values when converting the pointer to structure. The C# code is follows, tBinary tBin = new tBinary(); IntPtr tBinbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tBin)); Marshal.StructureToPtr(tBin.bin, tBinbuffer, false); tDeviceProperty tDevice = new tDeviceProperty(); tDevice.bin = tBinbuffer; IntPtr tDevicebuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tDevice)); Marshal.StructureToPtr(tDevice.bin, tDevicebuffer, false); Battary tbatt = new Battary(); tbatt.value = tDevicebuffer; IntPtr tbattbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tbatt)); Marshal.StructureToPtr(tbatt.value, tbattbuffer, false); result = GetDeviceProperty(ref tbattbuffer); Battary v = (Battary)Marshal.PtrToStructure(tbattbuffer, typeof(Battary)); tDeviceProperty v2 = (tDeviceProperty)Marshal.PtrToStructure(tDevicebuffer, typeof(tDeviceProperty)); tBinary v3 = (tBinary)Marshal.PtrToStructure(tBinbuffer, typeof(tBinary));
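
    The union itself usually maps to C# more directly with LayoutKind.Explicit and FieldOffset(0) on every member, instead of marshalling separately allocated pieces as above. A hedged sketch (the type names are mine, only part of the union is shown, and 32-bit sizes are assumed for DWORD/LONG/ULONG/BOOL):

      using System;
      using System.Runtime.InteropServices;

      [StructLayout(LayoutKind.Sequential)]
      struct TBinary
      {
          public uint size;
          public IntPtr bin;                 // BYTE* travels as a raw pointer
      }

      [StructLayout(LayoutKind.Explicit)]
      struct DP                              // mirrors union _DP
      {
          [FieldOffset(0)] public short i;
          [FieldOffset(0)] public int l;     // LONG
          [FieldOffset(0)] public uint ul;   // ULONG
          [FieldOffset(0)] public float flt;
          [FieldOffset(0)] public double dbl;
          [FieldOffset(0)] public int b;     // Win32 BOOL is a 4-byte int
          [FieldOffset(0)] public long li;   // LARGE_INTEGER
          [FieldOffset(0)] public TBinary bin;
      }

      [StructLayout(LayoutKind.Sequential)]
      struct TDeviceProperty
      {
          public uint Tag;                   // DWORD
          public uint Size;                  // DWORD
          public DP value;
      }

    With that in place, a single Marshal.PtrToStructure call (generic on recent frameworks, the typeof overload on older ones) reads the whole tDeviceProperty from the native pointer; the string and FILETIME members would still need their own marshalling decisions on top of this.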

  • Estimating the boundary of arbitrarily distributed data

    - by Dave
    I have two dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it. Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch. My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created of that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey region are the cells containing data, black where no data exists. OK, problem solved - why am I still here? Well.... I'd like a more "elegant" solution, or at least one that is faster (ie. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach - eg: from xmin to xmax, at y=ymin, check if data boundary crossed in intervals dx y=ymin+dy, do 1 do 1-2, but now sample in y An alternative is defining a centre, and sampling in r-theta space - ie radial spokes in dtheta increments. Both would produce a set of (x,y) points, but then how do I order/link neighbouring points them to create the boundary? A nearest neighbour approach is not appropriate as, for example (to borrow from Geography), an isthmus (think of Panama connecting N&S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon. The solution perhaps comes from solving an area maximisation problem. For a set of points defining the data limits, what is the maximum contiguous area contained within those points To form the enclosed area, what are the neighbouring points for the nth point? How will the holes be treated in this scheme - is this erring into topology now? Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really! Cheers, David

  • How do I create a Spring 3 + Tiles 2 webapp using REST-ful URLs?

    - by Ichiro Furusato
    I'm having a heck of a time resolving URLs with Spring 3.0 MVC. I'm just building a HelloWorld to try out how to build a RESTful webapp in Spring, nothing theoretically complicated. All of the examples I've been able to find are based on configurations that pay attention to file extensions ("*.htm" or "*.do"), include an artificial directory name prefix ("/foo") or even prefix paths with a dot (ugly), all approaches that use some artificial regex pattern as a signal to the resolver. For a REST approach I want to avoid all that muck and use only the natural URL patterns of my application. I would assume (perhaps incorrectly) that in web.xml I'd set a url-pattern of "/*" and pass everything to the DispatcherServlet for resolution, then just rely on URL patterns in my controller. I can't reliably get my resolver(s) to catch the URL patterns, and in all my trials this results in a resource not found error, a stack overflow (loop), or some kind of opaque Spring 3 ServletException stack trace — one of my ongoing frustrations with Spring generally is that the error messages are not often very helpful. I want to work with a Tiles 2 resolver. I've located my *.jsp files in WEB-INF/views/ and have a single line index.jsp file at the application root redirecting to the index file set by my layout.xml (the Tiles 2 Configurer). I do all the normal Spring 3 high-level configuration: <mvc:annotation-driven /> <mvc:view-controller path="/" view-name="index"/> <context:component-scan base-package="com.acme.web.controller" /> ...followed by all sorts of combinations and configurations of UrlBasedViewResolver, InternalResourceViewResolver, UrlFilenameViewController, etc. with all manner of variantions in my Tiles 2 configuration file. Then in my controller I've trying to pick up my URL patterns. Problem is, I can't reliably even get the resolver(s) to catch the patterns to send to my controller. This has now stretched to multiple days with no real progress on something I thought would be very simple to implement. I'm perhaps trying to do too much at once, though I would think this should be a simple (almost a default) configuration. I'm just trying to create a simple HelloWorld-type application, I wouldn't expect this is rocket science. Rather than me post my own configurations (which have ranged all over the map), does anyone know of an online example that: shows a simple Spring 3 MVC + Tiles 2 web application that uses REST-ful URLs (i.e., avoiding forced URL patterns such as file extensions, added directory names or dots) and relies solely on Spring 3 code/annotations (i.e., nothing outside of Spring MVC itself) to accomplish this? Is there an easy way to do this? Thanks very much for any help.

  • Stored procedure performance randomly plummets; trivial ALTER fixes it. Why?

    - by gWiz
    I have a couple of stored procedures on SQL Server 2005 that I've noticed will suddenly take a significantly long time to complete when invoked from my ASP.NET MVC app running in an IIS6 web farm of four servers. Normal, expected completion time is less than a second; unexpected anomalous completion time is 25-45 seconds. The problem doesn't seem to ever correct itself. However, if I ALTER the stored procedure (even if I don't change anything in the procedure, except to perhaps add a space to the script created by SSMS Modify command), the completion time reverts to expected completion time. IIS and SQL Server are running on separate boxes, both running Windows Server 2003 R2 Enterprise Edition. SQL Server is Standard Edition. All machines have dual Xeon E5450 3GHz CPUs and 4GB RAM. SQL Server is accessed using its TCP/IP protocol over gigabit ethernet (not sure what physical medium). The problem is present from all web servers in the web farm. When I invoke the procedure from a query window in SSMS on my development machine, the procedure completes in normal time. This is strange because I was under the impression that SSMS used the same SqlClient driver as in .NET. When I point my development instance of the web app to the production database, I again get the anomalous long completion time. If my SqlCommand Timeout is too short, I get System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Question: Why would performing ALTER on the stored procedure, without actually changing anything in it, restore the completion time to less than a second, as expected? Edit: To clarify, when the procedure is running slow for the app, it simultaneously runs fine in SSMS with the same parameters. The only difference I can discern is login credentials (next time I notice the behavior, I'll be checking from SSMS with the same creds). The ultimate goal is to get the procs to sustainably run with expected speed without requiring occasional intervention. Resolution: I wanted to to update this question in case others are experiencing this issue. Following the leads of the answers below, I was able to consistently reproduce this behavior. In order to test, I utilize sp_recompile and pass it one of the susceptible sprocs. I then initiate a website request from my browser that will invoke the sproc with atypical parameters. Lastly, I initiate a website request to a page that invokes the sproc with typical parameters, and observe that the request does not complete because of a SQL timeout on the sproc invocation. To resolve this on SQL Server 2005, I've added OPTIMIZE FOR hints to my SELECT. The sprocs that were vulnerable all have the "all-in-one" pattern described in this article. This pattern is certainly not ideal but was a necessary trade-off given the timeframe for the project.

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work??

    - by themoondothshine
    Hey all, I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: -- I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. -- An application is linked against libsome1.so. -- This application uses libdl.so to dynamically load another module, say libmagic.so. -- Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. -- So I try next to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... Or at least libVersion() and LIB_VERSION values match (and it reports version 2 not 1). -- However, when some data structures are serialized to disk, I noticed some corruption. In the application's directory if I delete libsome1.so and create a soft link in its place to point to libsome2.so, everything works as expected and the same corruption does not happen. I can't help but think that this may be caused due to some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are alised to symbol@@VER_2 (which I am still confused about because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2)... Nothing seems to work!!! Help!!!!!! Edit: I should have mentioned it earlier, but the app in question is Firefox, and libsome1.so is libsqlite3.so shipped with it. I don't quite have the option of recompiling them. Also, using version scripts to hide symbols seems to be the only solution right now. So what really happens when symbols are hidden? Do they become 'local' to the SO? Does rtld have no knowledge of their existence? What happens when an exported function refers to a hidden symbol?

  • Custom WM profile - issues with codec

    - by dominolog
    Hello I create my custom WM encoder profile. The reason I need a custom, non standard WM profile is that I need that the video resolution must be the same as input video stream. I created below profile but after I encode my video and audio with it, the WMP while loading says that the WMV1 codec is not found and prompts me for downloading WM encoder codecs. After installing them, the problem still exists. <profile version="589824" storageformat="1" name="mReplay Hi-End profile; WM Format 9; Audio &amp; Video" description="Streams: 1 audio 1 video"> <streamconfig majortype="{73647561-0000-0010-8000-00AA00389B71}" streamnumber="1" streamname="Audio Stream" inputname="Audio409" bitrate="320008" bufferwindow="-1" reliabletransport="0" decodercomplexity="" rfc1766langid="en-us" > <wmmediatype subtype="{00000161-0000-0010-8000-00AA00389B71}" bfixedsizesamples="1" btemporalcompression="0" lsamplesize="14861"> <waveformatex wFormatTag="353" nChannels="2" nSamplesPerSec="44100" nAvgBytesPerSec="40001" nBlockAlign="14861" wBitsPerSample="16" codecdata="008800000F0035E80000"/> </wmmediatype> </streamconfig> <streamconfig majortype="{73646976-0000-0010-8000-00AA00389B71}" streamnumber="2" streamname="Video Stream" inputname="Video409" bitrate="100000" bufferwindow="-1" reliabletransport="0" decodercomplexity="AU" rfc1766langid="en-us" vbrenabled="1" vbrquality="95" bitratemax="0" bufferwindowmax="0"> <videomediaprops maxkeyframespacing="80000000" quality="100"/> <wmmediatype subtype="{31564D57-0000-0010-8000-00AA00389B71}" bfixedsizesamples="0" btemporalcompression="1" lsamplesize="0"> <videoinfoheader dwbitrate="100000" dwbiterrorrate="0" avgtimeperframe="400000"> <rcsource left="0" top="0" right="0" bottom="0"/> <rctarget left="0" top="0" right="0" bottom="0"/> <bitmapinfoheader biwidth="0" biheight="0" biplanes="1" bibitcount="24" bicompression="WMV1" bisizeimage="0" bixpelspermeter="0" biypelspermeter="0" biclrused="0" biclrimportant="0"/> </videoinfoheader> </wmmediatype> </streamconfig> <streamprioritization> <stream number="1" mandatory="0"/> <stream number="2" mandatory="0"/> </streamprioritization> </profile>

  • IE7 and IE8: Float clearing without adding empty elements

    - by tk-421
    Hello, I'm having a problem similar to the one described here (without a resolution): http://stackoverflow.com/questions/2467745/ie7-float-and-clear-on-the-same-element The following HTML renders as intended in Firefox but not in (both) IE7 and IE8: <html> <head> <style> ul { list-style-type: none; } li { clear: both; padding: 5px; } .left { clear: left; float: left; } .middle { clear: none; float: left; } .right { clear: right; float: left; } </style> </head> <body> <ul> <li>1</li> <li class="left">2</li> <li class="right">3</li> <li class="left">4</li> <li class="middle">5</li> <li class="right">6</li> <li>7</li> </ul> </body> </html> This is a form layout, and in Firefox the results appear like: 1 2 3 4 5 6 7 That's what I'm going for. In IE7 and IE8 however, the results are: 1 2 3 5 6 4 7 [Note: I don't want to float anything to the right because I want the fields on my form to left-align correctly, without a giant space in-between the floated fields to account for the parent container's width.] Apparently I need a full clear, and can probably add an empty list-item element to the list to force clearing, but that seems like a dumb solution and sort of defeats the purpose. Any ideas? I've spent a few hours reading and trying different options without success.

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. An an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication. Users can take multiple roles within multiple publications. Example table setup: "users" : user_id, firstname, lastname "publications" : publication_id, name "link_writers" : user_id, publication_id "link_editors" : user_id, publication_id Current psuedo SQL: SELECT * FROM ( (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%') UNION (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%') ) AS dt JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id At the moment my roles statement is: SELECT dt2.user_id, dt2.publication_id, dt.role FROM ( (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers) UNION (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors) ) AS dt2 The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example "publishers" might be linked accross two tables "link_publishers": user_id, publisher_group_id "link_publisher_groups": publisher_group_id, publication_id So in that instance, the query forming part of my UNION would be: SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id FROM link_publishers JOIN link_publisher_groups ON lpg.group_id = lp.group_id I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and upto 70,000 rows in each of the link tables. Initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles? -------------------------- EDIT ---------------------------------- Explain above (open in a new window to see full resolution). The bottom bit in red, is the "WHERE firstname LIKE '%Jenkz%'" the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%'. Hence the large row count, but I think this is unavoidable, unless there is a way to put an index accross concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12) which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use the dt.user_id as a comparison for the internal of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams" etc.

  • Planning to create PDF files in Ruby on Rails

    - by deau
    Hi there, A Ruby on Rails app will have access to a number of images and fonts. The images are components of a visual layout which will be stored separately as a set of rules. The rules specify document dimensions along with which images are used and where. The app needs to take these rules, fetch the images, and generate a PDF that is ready for local printing or emailing. The fonts will also be important. The user needs to customize the layout by inputting text which will be included in the PDF. The PDF must therefore also contain the desired font so that the document renders identically across different machines. Each PDF may have many pages. Each page may have different dimensions but this is not essential. Either way, the ability to manipulate the dimensions and margins given by the PDF is essential. The only thing that needs to be regularly changed is the text. If this is takes too much development then the app can store the layouts in 3rd party PDFs and edit the textual content directly. Eventually though, this will prove too restrictive on the apps intended functionality so I would prefer the app to generate the PDF's itself. I have never worked with PDFs before and, for the most part, I've never had to output anything to the user outside their monitor. A printed medium could require a very different approach to get the best results. If anyone has any advice on how to model the PDF format this it would be really appreciated. The technical aspects of printing such as bleed, resolution and colour have already been factored in to the layouts and images. I am aware that PDF is a proprietary file format and I want to use free or open source software. I have seen a number of Ruby libraries for generating PDF files but because I am new on this scene I have no way to reliably compare them and too little time to implement and test them all. I also have the option of using C to handle this feature and if this is process intensive then that might be preferred. What should I be thinking about and how should I be planning to implement this?

  • Project Euler #18 - how to brute force all possible paths in tree-like structure using Python?

    - by euler user
    Am trying to learn Python the Atlantic way and am stuck on Project Euler #18. All of the stuff I can find on the web (and there's a LOT more googling that happened beyond that) is some variation on 'well you COULD brute force it, but here's a more elegant solution'... I get it, I totally do. There are really neat solutions out there, and I look forward to the day where the phrase 'acyclic graph' conjures up something more than a hazy, 1 megapixel resolution in my head. But I need to walk before I run here, see the state, and toy around with the brute force answer. So, question: how do I generate (enumerate?) all valid paths for the triangle in Project Euler #18 and store them in an appropriate python data structure? (A list of lists is my initial inclination?). I don't want the answer - I want to know how to brute force all the paths and store them into a data structure. Here's what I've got. I'm definitely looping over the data set wrong. The desired behavior would be to go 'depth first(?)' rather than just looping over each row ineffectually.. I read ch. 3 of Norvig's book but couldn't translate the psuedo-code. Tried reading over the AIMA python library for ch. 3 but it makes too many leaps. triangle = [ [75], [95, 64], [17, 47, 82], [18, 35, 87, 10], [20, 4, 82, 47, 65], [19, 1, 23, 75, 3, 34], [88, 2, 77, 73, 7, 63, 67], [99, 65, 4, 28, 6, 16, 70, 92], [41, 41, 26, 56, 83, 40, 80, 70, 33], [41, 48, 72, 33, 47, 32, 37, 16, 94, 29], [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14], [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57], [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48], [63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31], [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23], ] def expand_node(r, c): return [[r+1,c+0],[r+1,c+1]] all_paths = [] my_path = [] for i in xrange(0, len(triangle)): for j in xrange(0, len(triangle[i])): print 'row ', i, ' and col ', j, ' value is ', triangle[i][j] ??my_path = somehow chain these together??? if my_path not in all_paths all_paths.append(my_path) Answers that avoid external libraries (like itertools) preferred.

  • treeview size in wpf

    - by AComputert
    please let me know how can I re size height of tree view control when screen resolution is changed? please see this code: <TreeView Name="treeView1" Height="150" VerticalAlignment="Top"> <TreeViewItem Header="Root" IsExpanded="True"> <TreeViewItem Header="Item 1"></TreeViewItem> <TreeViewItem Header="Item 2"></TreeViewItem> <TreeViewItem Header="Item 3"></TreeViewItem> <TreeViewItem Header="Item 4"></TreeViewItem> <TreeViewItem Header="Item 5"></TreeViewItem> <TreeViewItem Header="Item 6"></TreeViewItem> <TreeViewItem Header="Item 7"></TreeViewItem> <TreeViewItem Header="Item 8"></TreeViewItem> <TreeViewItem Header="Item 9"></TreeViewItem> <TreeViewItem Header="Item 10"></TreeViewItem> <TreeViewItem Header="Item 11"></TreeViewItem> <TreeViewItem Header="Item 12"></TreeViewItem> <TreeViewItem Header="Item 13"></TreeViewItem> <TreeViewItem Header="Item 14"></TreeViewItem> <TreeViewItem Header="Item 15"></TreeViewItem> <TreeViewItem Header="Item 16"></TreeViewItem> <TreeViewItem Header="Item 17"></TreeViewItem> <TreeViewItem Header="Item 18"></TreeViewItem> <TreeViewItem Header="Item 19"></TreeViewItem> <TreeViewItem Header="Item 20"></TreeViewItem> <TreeViewItem Header="Item 21"></TreeViewItem> <TreeViewItem Header="Item 22"></TreeViewItem> <TreeViewItem Header="Item 23"></TreeViewItem> <TreeViewItem Header="Item 24"></TreeViewItem> <TreeViewItem Header="Item 24"></TreeViewItem> </TreeViewItem> </TreeView> in some screen resolutions i can see all nodse and in some resolutions i see a scroll bar. I want to see all nodes without scroll bar.

  • best alternative to in-definition initialization of static class members? (for SVN keywords)

    - by Jeff
    I'm storing expanded SVN keyword literals for .cpp files in 'static char const *const' class members and want to store the .h descriptions as similarly as possible. In short, I need to guarantee single instantiation of a static member (presumably in a .cpp file) to an auto-generated non-integer literal living in a potentially shared .h file. Unfortunately the language makes no attempt to resolve multiple instantiations resulting from assignments made outside class definitions and explicitly forbids non-integer inits inside class definitions. My best attempt (using static-wrapping internal classes) is not too dirty, but I'd really like to do better. Does anyone have a way to template the wrapper below or have an altogether superior approach? // Foo.h: class with .h/.cpp SVN info stored and logged statically class Foo { static Logger const verLog; struct hInfoWrap; public: static hInfoWrap const hInfo; static char const *const cInfo; }; // Would like to eliminate this per-class boilerplate. struct Foo::hInfoWrap { hInfoWrapper() : text("$Id$") { } char const *const text; }; ... // Foo.cpp: static inits called here Foo::hInfoWrap const Foo::hInfo; char const *const Foo::cInfo = "$Id$"; Logger const Foo::verLog(Foo::cInfo, Foo::hInfo.text); ... // Helper.h: output on construction, with no subsequent activity or stored fields class Logger { Logger(char const *info1, char const *info2) { cout << info0 << endl << info1 << endl; } }; Is there a way to get around the static linkage address issue for templating the hInfoWrap class on string literals? Extern char pointers assigned outside class definitions are linguistically valid but fail in essentially the same manner as direct member initializations. I get why the language shirks the whole resolution issue, but it'd be very convenient if an inverted extern member qualifier were provided, where the definition code was visible in class definitions to any caller but only actually invoked at the point of a single special declaration elsewhere. Anyway, I digress. What's the best solution for the language we've got, template or otherwise? Thanks!

  • PHP-Mcrypt Installation

    - by Infinity
    I need to install php-mcrypt on my CentOS 5.5 VPS, When I try yum install php-mcrypt, it says that it is set to be updated which implies that it is already installed. I looked in the /usr/lib/php/modules and cant find the .so file. Anyway I want to update it but yum is giving the following error, I am running PHP-FPM on Nginx. Last login: Thu Apr 21 12:13:30 2011 from cpc2-seve18-2-0-cust438.13-3.cable.virginmedia.com [root@infinity ~]# yum install php-mcrypt Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated --> Processing Dependency: php-api = 20041225 for package: php-mcrypt --> Processing Dependency: php >= 5.1.6 for package: php-mcrypt --> Running transaction check ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php --> Processing Dependency: php-cli = 5.1.6-27.el5_5.3 for package: php ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated --> Processing Dependency: php-api = 20041225 for package: php-mcrypt --> Running transaction check ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php ---> Package php-cli.i386 0:5.1.6-27.el5_5.3 set to be updated --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php-cli ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated --> Processing Dependency: php-api = 20041225 for package: php-mcrypt --> Finished Dependency Resolution php-mcrypt-5.1.6-15.el5.centos.1.i386 from extras has depsolving problems --> Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras) php-5.1.6-27.el5_5.3.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base) php-cli-5.1.6-27.el5_5.3.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base) Error: Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras) Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base) Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. [root@infinity ~]# Any ideas?

  • FFSERVER - streaming an ASF video as Webm output

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam ASF live stream to a ffserver to output a webm video format. The server starts successfully but the ffserver commands used to feed the ffserver fails and generates a core dump. Environment Debian 7.5 ffmpeg 2.2 Input stream $ ffprobe http://account:password@webcam/videostream.asf Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s ffserver configuration my ffserver configuration is : Port 8091 RTSPPort 554 BindAddress 192.168.1.62 MaxHTTPConnections 1000 MaxClients 100 MaxBandwidth 1000 CustomLog - <Feed webcam.ffm> File /tmp/webcam.ffm FileMaxSize 500M ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Feed> <Stream webcam.webm> # Output stream URL definition Feed webcam.ffm # Feed from which to receive video Format webm # Audio settings AudioCodec vorbis AudioBitRate 64 # Audio bitrate # Video settings VideoCodec libvpx VideoSize 640x480 # Video resolution VideoFrameRate 25 # Video FPS AVOptionVideo flags +global_header # Parameters passed to encoder # (same as ffmpeg command-line parameters) AVOptionVideo cpu-used 0 AVOptionVideo qmin 10 AVOptionVideo qmax 42 AVOptionVideo quality good AVOptionAudio flags +global_header PreRoll 15 StartSendOnKey # VideoBitRate 32 # Video bitrate </Stream> <Stream status.html> Format status # Only allow local people to get the status ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Stream> ffmpeg feed I run the following command that fails $ ffmpeg -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm http://192.168.1.62:8091/webcam.ffm Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly Segmentation fault I tryed $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://192.168.1.62:8091/webcam.ffm But it raises the same error. Thanks for your help Edit For an easy testing (I thought), I tried to publish the whole ASF stream as is, meaning connecting the ASF webcam output stream to the ffserver that outputs ASF format too. And thus with mirrored encoding so I changed the ffserver configuration to ... <Stream webcam.asf> Feed webcam.ffm Format asf VideoFrameRate 25 VideoSize 640X480 VideoBitRate 256 VideoBufferSize 1000 VideoGopSize 30 AudioBitRate 32 StartSendOnKey </Stream> ... 
And the output is now : Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm': Metadata: creation_time : now encoder : Lavf55.40.100 Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s Metadata: encoder : Lavc55.64.100 wmav2 Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc Metadata: Stream mapping: Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2) Stream #0:0 -> #0:1 (mjpeg -> msmpeg4) Press [q] to stop, [?] for help Segmentation fault I can't even forward the stream.
