Search Results

Search found 9335 results on 374 pages for 'extension modules'.


  • How to specify generic method type parameters partly

    - by DNNX
    I have an extension method like below: public static T GetValueAs<T, R>(this IDictionary<string, R> dictionary, string fieldName) where T : R { R value; if (!dictionary.TryGetValue(fieldName, out value)) return default(T); return (T)value; } Currently, I can use it in the following way: var dictionary = new Dictionary<string, object>(); //... var list = dictionary.GetValueAs<List<int>, object>("A"); // this may throw ClassCastException - this is expected behavior; It works pretty fine, but the second type parameter is really annoying. Is it possible in C# 4.0 to rewrite GetValueAs in such a way that the method will still be applicable to different types of string-keyed dictionaries AND there will be no need to specify the second type parameter in the calling code, i.e. use var list = dictionary.GetValueAs<List<int>>("A"); or at least something like var list = dictionary.GetValueAs<List<int>, ?>("A"); instead of var list = dictionary.GetValueAs<List<int>, object>("A");
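    One possible direction (a sketch only, not a verified answer): C# type inference is all-or-nothing, so a common workaround is to split the call into two steps, letting R be inferred from the dictionary and spelling out only T. The From/As names below are made up for illustration.

    using System.Collections.Generic;

    public static class DictionaryExtensions
    {
        // Step 1: R is inferred from the dictionary, no type arguments needed here.
        public static ValueGetter<R> From<R>(this IDictionary<string, R> dictionary, string fieldName)
        {
            R value;
            bool found = dictionary.TryGetValue(fieldName, out value);
            return new ValueGetter<R>(found, value);
        }
    }

    public struct ValueGetter<R>
    {
        private readonly bool _found;
        private readonly R _value;

        public ValueGetter(bool found, R value)
        {
            _found = found;
            _value = value;
        }

        // Step 2: only the target type T has to be written out.
        public T As<T>() where T : R
        {
            return _found ? (T)_value : default(T);
        }
    }

    // Usage (hypothetical): var list = dictionary.From("A").As<List<int>>();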

    Read the article

  • C# Convert string to nullable type (int, double, etc...)

    - by Nathan Koop
    I am attempting to do some data conversion. Unfortunately, much of the data is in strings, where it should be ints or doubles, etc... So what I've got is something like: double? amount = Convert.ToDouble(strAmount); The problem with this approach is that if strAmount is empty, I want amount to be null, so when I add it into the database the column will be null. So I ended up writing this: double? amount = null; if(strAmount.Trim().Length>0) { amount = Convert.ToDouble(strAmount); } Now this works fine, but I now have five lines of code instead of one. This makes things a little more difficult to read, especially when I have a large number of columns to convert. I thought I'd use an extension to the string class and generics to pass in the type, because it could be a double, or an int, or a long. So I tried this: public static class GenericExtension { public static Nullable<T> ConvertToNullable<T>(this string s, T type) where T: struct { if (s.Trim().Length > 0) { return (Nullable<T>)s; } return null; } } But I get the error: Cannot convert type 'string' to 'T?' Is there a way around this? I am not very familiar with creating methods using generics.
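    A minimal sketch of one way around the compile error, assuming the target types are all handled by Convert.ChangeType (int, double, long, etc.); the conversion goes through object because ChangeType returns object:

    using System;

    public static class GenericExtension
    {
        public static T? ConvertToNullable<T>(this string s) where T : struct
        {
            // Treat null or empty input as "no value" so the database column stays null.
            if (s == null || s.Trim().Length == 0)
                return null;

            // Convert.ChangeType returns object, which is then unboxed into T.
            return (T)Convert.ChangeType(s.Trim(), typeof(T));
        }
    }

    // Usage: double? amount = strAmount.ConvertToNullable<double>();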

    Read the article

  • Content script, dynamically created iframe, postMessage

    - by thefoyer
    I'm attempting to inject an iframe from a content script and, from the content script, post a message to the iframe, without much success. This is the closest I have got. There are no errors/warnings in the console, but it doesn't work (alert test). contentscript: var iframe = document.createElement("iframe"); iframe.setAttribute("src", "https://www.com/iframe.php"); iframe.id = "iframe01"; document.getElementsByTagName("body")[0].appendChild(iframe); //then I inject this "web_accessible_resources" script var script = document.createElement("script"); script.type = "text/javascript"; script.src = chrome.extension.getURL("postMessage.js"); document.getElementsByTagName("head")[0].appendChild(script); postMessage.js window.postMessage({msg: "test"}, "*"); I've also tried top.postMessage({msg: "test"}, "*"); And var iframe = document.getElementById('iframe01'); iframe.contentWindow.postMessage({msg: "test"}, "*"); EDIT: I tried to make sure the iframe was loaded before calling postMessage; even when I put an alert there, it would alert telling me the iframe was loaded. var iframe = document.getElementById('iframe01'); if (ifrm_prsto.contentWindow.document) //do postMessage EDIT2: I did get it to work by moving the iframe from the content script to the inject.js script. It wasn't totally ideal, but I do have it working now, I guess. iframe.php window.addEventListener("message", function(e) {alert("test");}); I am, however, able to do the reverse: talk to the parent script from the iframe.

    Read the article

  • How to make chrome.tabs.update work with a content script

    - by user1673772
    I am working on a little extension for Google Chrome. I want to create a new tab, go to the url "sample"+i+".com", launch a content script on this url, update the current tab to "sample"+(i+1)+".com", and launch the same script. I looked at the Q&A available on Stack Overflow and I googled it, but I didn't find a solution that works. This is my actual code in background.js (it works): it creates two tabs (i=21 and i=22) and loads my content script for each url, but when I try to do a chrome.tabs.update, Chrome directly launches a tab with i = 22 (and the script works only one time): function extraction(tab) { for (var i =21; i<23;i++) { chrome.storage.sync.set({'extraction' : 1}, function() {}); //for my content script chrome.tabs.create({url: "http://example.com/"+i+".html"}, function() {}); } } chrome.browserAction.onClicked.addListener(function(tab) {extraction(tab);}); If anyone can help me, the content script and manifest.json are not the problem. I want to do this 15000 times so I can't do otherwise. Thank you.

    Read the article

  • Handy ASP.NET MVC 2 Extension Methods – Where am I?

    - by Bobby Diaz
    Have you ever needed to detect what part of the application is currently being viewed?  This might be a bigger issue if you write a lot of shared/partial views or custom display or editor templates.  Another scenario, which is the one I encountered when I first started down this path, is when you have some type of menu and you’d like to be able to determine which item represents the current page so you can highlight it in some way.  A simple example is the menu that is created as part of the default ASP.NET MVC 2 Application template.   <div id="menucontainer">       <ul id="menu">         <li><%= Html.ActionLink("Home", "Index", "Home") %></li>         <li><%= Html.ActionLink("About", "About", "Home") %></li>     </ul>   </div>   The part that got me at first, however, was the following entry in the default style sheet (Site.css):   ul#menu li.selected a {     background-color: #fff;     color: #000; }   I assumed that the .selected class would automatically get applied to the active menu item.  After trying a few different things, including the MvcContrib MenuBuilder, I decided to write my own extension methods so I would have more control over the output.  First, I needed a way to determine what view the user has navigated to based on the requested URL and route configuration.  Now, I am sure there are many ways to do this, but this is what I came up with:   public static class RequestExtensions {     public static bool IsCurrentRoute(this RequestContext context, String areaName,         String controllerName, params String[] actionNames)     {         var routeData = context.RouteData;         var routeArea = routeData.DataTokens["area"] as String;         var current = false;           if ( ((String.IsNullOrEmpty(routeArea) && String.IsNullOrEmpty(areaName)) ||               (routeArea == areaName)) &&              ((String.IsNullOrEmpty(controllerName)) ||               (routeData.GetRequiredString("controller") == controllerName)) &&              ((actionNames == null) ||                actionNames.Contains(routeData.GetRequiredString("action"))) )         {             current = true;         }           return current;     }       // additional overloads omitted... }   With that in place, I was able to write several UrlHelper methods that check if the supplied values map to the current view.   public static class UrlExtensions {     public static bool IsCurrent(this UrlHelper urlHelper, String areaName,         String controllerName, params String[] actionNames)     {         return urlHelper.RequestContext.IsCurrentRoute(areaName, controllerName, actionNames);     }       public static string Selected(this UrlHelper urlHelper, String areaName,         String controllerName, params String[] actionNames)     {         return urlHelper.IsCurrent(areaName, controllerName, actionNames)             ? "selected" : String.Empty;     }       // additional overloads omitted... }   Now I can re-work the original menu to utilize these new methods.  Note: be sure to import the proper namespace so the extension methods become available inside your views!   <div id="menucontainer">       <ul id="menu">         <li class="<%= Url.Selected(null, "Home", "Index") %>">             <%= Html.ActionLink("Home", "Index", "Home")%></li>           <li class="<%= Url.Selected(null, "Home", "About") %>">             <%= Html.ActionLink("About", "About", "Home")%></li>     </ul>   </div>   If we take it one step further, we can clean up the markup even more.  
Check out the Html.ActionMenuItem() extension method and the refined menu:   public static class HtmlExtensions {     public static MvcHtmlString ActionMenuItem(this HtmlHelper htmlHelper, String linkText,         String actionName, String controllerName)     {         var html = new StringBuilder("<li");           if ( htmlHelper.ViewContext.RequestContext                 .IsCurrentRoute(null, controllerName, actionName) )         {             html.Append(" class=\"selected\"");         }           html.Append(">")             .Append(htmlHelper.ActionLink(linkText, actionName, controllerName))             .Append("</li>");           return MvcHtmlString.Create(html.ToString());     }       // additional overloads omitted... }   <div id="menucontainer">       <ul id="menu">         <%= Html.ActionMenuItem("Home", "Index", "Home") %>         <%= Html.ActionMenuItem("About", "About", "Home") %>     </ul>   </div>   Which generates the following HTML:   <div id="menucontainer">       <ul id="menu">         <li class="selected"><a href="/">Home</a></li>         <li><a href="/Home/About">About</a></li>     </ul>   </div>     I have created a codepaste of these extension methods if you are interested in using them in your own projects.  Enjoy!

    Read the article

  • Is there an AIR native extension to use GameCenter APIs for turn-based games?

    - by Phil
    I'm planning a turn-based game using the iOS 5 GameCenter (GameKit) turn-based functions. Ideally I would program the game with AIR (I'm a Flash dev), but so far I can't seem to find any already available native extension that offers that (only basic GameCenter functions), so my questions are: Does anyone know if one already exists? And secondly, how complex a task would it be to create an extension that does that? Are there any pitfalls I should be aware of, etc.? ** UPDATE ** There does not seem to be a solution to the above from Adobe. For anyone who is interested, check out the Adobe Gaming SDK. It contains a Game Center ANE which, I've read, contains options for multiplayer but not turn-based multiplayer; at least it's a start. It comes a bit late for me as I've already learned Obj-C!

    Read the article

  • Symfony2 : How to make the php_intl extension available for Symfony2?

    - by Miles M.
    I'm trying to follow this documentation on Symfony: http://symfony.com/doc/current/book/forms.html OK, so here is my situation: I've externalised my form and created a specific form class to handle the process and be able to reuse it. What happens when I submit the form, whether the info is okay or not for my class, is that I get this fatal error: Fatal error: Call to a member function setAttribute() on a non-object in C:\Program Files (x86)\wamp\www\QNetworks\vendor\symfony\src\Symfony\Component\Form\Extension\Core\DataTransformer\NumberToLocalizedStringTransformer.php on line 130 Call Stack I'm running PHP 5.3.9 and my intl extension is installed and activated, BUT when I run the app/check.php command I see: [[WARNING]] Checking that the intl extension is available: FAILED * Install and enable the intl extension (used for validators) * So I don't understand what the problem with this extension is. Should I reinstall it? When I go here: http://php.net/manual/en/intl.requirements.php I see that I can install the PECL extension or the ICU library, but I don't know if I should, and if there is any relation to my problem. Thanks for your help!

    Read the article

  • Possible to map a new file extension to an existing handler in ASP.NET?

    - by Dave
    I have a scenario where my application is going to be publishing services that are consumed by both PCs and mobile devices, and I have an HTTPModule that I want to perform work on only the mobile requests. So I thought the best way of doing this was to point the mobile requests to a different file extension and have the HTTPModule process a request only if it targets this new extension. I don't need a custom HTTPHandler for the new extension; I want to program the services like a normal .ASMX service, just with a different extension. First, can I do this? If so, how do I do it so that requests to my new extension are handled just like .ASMX requests? Second, is this the right approach? Am I going about separating and managing the mobile vs. PC requests the wrong way? Thanks, Dave
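    A rough sketch of the module-side check, assuming the new extension (the ".masmx" below is a made-up name) has already been mapped to the same handler as .asmx in IIS/web.config; only the path test is the point, the rest is boilerplate:

    using System;
    using System.Web;

    public class MobileRequestModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.BeginRequest += OnBeginRequest;
        }

        private void OnBeginRequest(object sender, EventArgs e)
        {
            var app = (HttpApplication)sender;

            // Only do the mobile-specific work when the request targets the mobile extension.
            if (app.Request.Path.EndsWith(".masmx", StringComparison.OrdinalIgnoreCase))
            {
                // ... mobile-only processing goes here ...
            }
        }

        public void Dispose()
        {
        }
    }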

    Read the article

  • Can certain system-hungry modules be disabled in Ubuntu?

    - by Ole Thomsen Buus
    Hi, let me add some context: I am currently using Ubuntu 9.10 64-bit (Desktop) on a relatively powerful stationary PC (Intel Core i7 920, 12GB RAM). My purpose is high-speed imaging with a Point Grey Grasshopper machine-vision camera (for research, a PhD project). This camera is capable of 200 fps at full VGA (640x480) resolution. The camera is connected using FireWire (1394b) and the drivers and software from Point Grey work great. I have developed a console C++ application that can grab a certain number of frames to preallocated memory and afterwards also save the grabbed frames to the hard drive. Currently it works fine, but sometimes I observe a few frame drops (1-3). When this happens I reset the experiment and repeat the recording, and usually I am lucky the second time with no frame drops (the camera driver has an internal frame counter that I am using). Question: I usually go to tty1 and use "sudo service gdm stop" to disable the graphical frontend. It seems to release some memory, though that is not my main concern. My concern is CPU resources. Are there other system-hungry modules that can be disabled temporarily such that the CPU gets less busy on Ubuntu 9.10? At some point in the future I will update to 10.10. Should I perhaps opt for the server edition instead? Thanks.

    Read the article

  • Firefox logs invalid URL?

    - by thanks for help
    I'm writing an extension for firefox. Using dom.location to keep track of visited search results pages, i'm getting this url http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e . If you click it, the google search results for "hi" should come up. You'll know that from the title bar - because the rest of the page won't load. This happens with any google search. Oddly enough, if you cut part of it off, so say, http://www.google.com/search?hl=en&source=hp&q=hi - it works! But Googling "hi" myself does give me a longish URL - http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=db658cc5049dc510 . I know for a fact that the first time that URL was visited, the page loaded, I did it myself. Can anyone make reason out of this? I just tried my experiment again, this time saving the original URL in the location bar. It turns out, dom.location.href is giving a different value. How is this happening? Original: http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e dom.location.href http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e window.addEventListener("load", function() { myExtension.init(); }, false); var myExtension = { init: function() { var appcontent = document.getElementById("appcontent"); // browser if(appcontent) appcontent.addEventListener("DOMContentLoaded", myExtension.onPageLoad, true); var messagepane = document.getElementById("messagepane"); // mail if(messagepane) messagepane.addEventListener("load", function () { myExtension.onPageLoad(); }, true); }, onPageLoad: function(aEvent) { var doc = aEvent.originalTarget; // doc is document that triggered "onload" event // do something with the loaded page. // doc.location is a Location object (see below for a link). // You can use it to make your code executed on certain pages only. var url = doc.location.href; if (url.match(/(?:p|q)(?:=)([^%]*)/)) {alert("MATCH" + url);resultsPages.push(url);} else {alert(url); } } This snippet comes directly from Mozilla with the matching and alerts my own. I apologize for not posting the code earlier.

    Read the article

  • How can I read/write data from a file?

    - by samy
    I'm writing a simple chrome extension. I need to create the ability to add sites URLs to a list, or read from the list. I use the list to open the sites in the new tabs. I'm looking for a way to have a data file I can write to, and read from. I was thinking on XML. I read there is a problem changing the content of files with Javascript. Is XML the right choice for this kinda thing? I should add that there is no web server, and the app will run locally, so maybe the problem websites are having are not same as this. Before I wrote this question, I tried one thing, and started to feel insecure because it didn't work. I made a XML file called Site.xml: <?xml version="1.0" encoding="utf-8" ?> <Sites> <site> <url> http://www.sulamacademy.com/AddMsgForum.asp?FType=273171&SBLang=0&WSUAccess=0&LocSBID=20375 </url> </site> <site> <url> http://www.wow.co.il </url> </site> <site> <url> http://www.Google.co.il </url> </site> I made this script to read the data from him, and put in on the html file. function LoadXML() { var ajaxObj = new XMLHttpRequest(); ajaxObj.open('GET', 'Sites.xml', false); ajaxObj.send(); var myXML = ajaxObj.responseXML; document.write('<table border="2">'); var prs = myXML.getElementsByTagName("site"); for (i = 0; i < prs.length; i++) { document.write("<tr><td>"); document.write(prs[i].getElementsByName("url")[0].childNode[0].nodeValue); document.write("</td></tr>"); } document.write("</table"); }

    Read the article

  • Extending Enums, Overkill?

    - by CkH
    I have an object that needs to be serialized to an EDI format. For this example we'll say it's a car. A car might not be the best example b/c options change over time, but for the real object the Enums will never change. I have many Enums like the following with custom attributes applied. public enum RoofStyle { [DisplayText("Glass Top")] [StringValue("GTR")] Glass, [DisplayText("Convertible Soft Top")] [StringValue("CST")] ConvertibleSoft, [DisplayText("Hard Top")] [StringValue("HT ")] HardTop, [DisplayText("Targa Top")] [StringValue("TT ")] Targa, } The Attributes are accessed via Extension methods: public static string GetStringValue(this Enum value) { // Get the type Type type = value.GetType(); // Get fieldinfo for this type FieldInfo fieldInfo = type.GetField(value.ToString()); // Get the stringvalue attributes StringValueAttribute[] attribs = fieldInfo.GetCustomAttributes( typeof(StringValueAttribute), false) as StringValueAttribute[]; // Return the first if there was a match. return attribs.Length > 0 ? attribs[0].StringValue : null; } public static string GetDisplayText(this Enum value) { // Get the type Type type = value.GetType(); // Get fieldinfo for this type FieldInfo fieldInfo = type.GetField(value.ToString()); // Get the DisplayText attributes DisplayTextAttribute[] attribs = fieldInfo.GetCustomAttributes( typeof(DisplayTextAttribute), false) as DisplayTextAttribute[]; // Return the first if there was a match. return attribs.Length > 0 ? attribs[0].DisplayText : value.ToString(); } There is a custom EDI serializer that serializes based on the StringValue attributes like so: StringBuilder sb = new StringBuilder(); sb.Append(car.RoofStyle.GetStringValue()); sb.Append(car.TireSize.GetStringValue()); sb.Append(car.Model.GetStringValue()); ... There is another method that can get Enum Value from StringValue for Deserialization: car.RoofStyle = Enums.GetCode<RoofStyle>(EDIString.Substring(4, 3)) Defined as: public static class Enums { public static T GetCode<T>(string value) { foreach (object o in System.Enum.GetValues(typeof(T))) { if (((Enum)o).GetStringValue() == value.ToUpper()) return (T)o; } throw new ArgumentException("No code exists for type " + typeof(T).ToString() + " corresponding to value of " + value); } } And Finally, for the UI, the GetDisplayText() is used to show the user friendly text. What do you think? Overkill? Is there a better way? or Goldie Locks (just right)? Just want to get feedback before I intergrate it into my personal framework permanently. Thanks.
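    One small, optional refinement (my own sketch, not part of the post): since GetStringValue reflects on every call, the lookup can be cached once per enum value, which matters if serialization runs in a tight loop. This reuses the StringValueAttribute from the post and assumes .NET 4's ConcurrentDictionary is available:

    using System;
    using System.Collections.Concurrent;

    public static class EnumStringValueCache
    {
        private static readonly ConcurrentDictionary<Enum, string> Cache =
            new ConcurrentDictionary<Enum, string>();

        public static string GetStringValueCached(this Enum value)
        {
            // Reflection runs only the first time a given enum value is seen.
            return Cache.GetOrAdd(value, v =>
            {
                var fieldInfo = v.GetType().GetField(v.ToString());
                var attribs = (StringValueAttribute[])fieldInfo.GetCustomAttributes(
                    typeof(StringValueAttribute), false);
                return attribs.Length > 0 ? attribs[0].StringValue : null;
            });
        }
    }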

    Read the article

  • Nginx http_mp4_module seems installed but doesn't work

    - by Tahola
    I am trying to use the http_mp4_module on my Ubuntu server, but it doesn't seem to work at all. When I check nginx -V I get: nginx version: nginx/1.1.19 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.1.19/debian/modules/chunkin-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/headers-more-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-development-kit --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-http-push --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-lua --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-progress --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-dav-ext-module --with-http_mp4_module and --with-http_flv_module are there. I also added in sites-available/domaine.conf location ~ .mp4$ { mp4; mp4_buffer_size 4M; mp4_max_buffer_size 10M; } location ~ .flv$ { flv; } and Nginx restarted without error. Everything seems OK, but when I check my urls, myvideo.mp4?start=60 returns a 404 error (which I think is normal) and video.mp4?starttime=60 returns the video, but whatever the starttime number is, I get the full video from the beginning. Did I miss something?

    Read the article

  • Filtering in a HierarchicalDataTemplate via MarkupExtension?

    - by Dan Bryant
    I'm trying to create a MarkupExtension to allow filtering of items in an ItemsSource of a HierarchicalDataTemplate. In particular, I'd like to be able to supply a method name that will be executed on the DataContext in order to perform the filtering. The usage syntax I'm after looks like this: <HierarchicalDataTemplate DataType="{x:Type src:DeviceBindingViewModel}" ItemsSource="{Utilities:FilterCollection {Binding Definition.Entries}, MethodName=FilterEntries}"> <StackPanel Orientation="Horizontal"> <Image Source="{StaticResource BindingImage}" Width="24" Height="24" Margin="3"/> <TextBlock Text="{Binding DisplayName}" FontSize="12" VerticalAlignment="Center"/> </StackPanel> </HierarchicalDataTemplate> My code for the custom MarkupExtension looks like this: public sealed class FilterCollectionExtension : MarkupExtension { private readonly MultiBinding _binding; private Predicate<Object> _filterMethod; public string MethodName { get; set; } public FilterCollectionExtension(Binding binding) { _binding = new MultiBinding(); _binding.Bindings.Add(binding); //We package a reference to the DataContext with the binding so that the Converter has access to it var selfBinding = new Binding {RelativeSource = RelativeSource.Self}; _binding.Bindings.Add(selfBinding); _binding.Converter = new InternalConverter(this); } public FilterCollectionExtension(Binding binding, string methodName) : this(binding) { MethodName = methodName; } public override object ProvideValue(IServiceProvider serviceProvider) { return _binding; } private bool FilterInternal(Object dataContext, Object value) { //Filtering is only applicable if a DataContext is defined if (dataContext != null) { if (_filterMethod == null) { var type = dataContext.GetType(); var method = type.GetMethod(MethodName, new[] { typeof(Object) }); if (method == null || method.ReturnType != typeof(bool)) throw new InvalidOperationException("Could not locate a filter predicate named " + MethodName + " on the DataContext"); _filterMethod = (Predicate<Object>)Delegate.CreateDelegate(typeof(Predicate<Object>), dataContext, method); } else { if (_filterMethod.Target != dataContext) { _filterMethod = (Predicate<Object>) Delegate.CreateDelegate(typeof (Predicate<Object>), dataContext, _filterMethod.Method); } } if (_filterMethod != null) return _filterMethod(value); } //If no filtering resolved, just allow all elements return true; } private class InternalConverter : IMultiValueConverter { private readonly FilterCollectionExtension _owner; public InternalConverter(FilterCollectionExtension owner) { _owner = owner; } public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture) { var enumerable = values[0]; var targetElement = (FrameworkElement)values[1]; var view = CollectionViewSource.GetDefaultView(enumerable); view.Filter = item => _owner.FilterInternal(targetElement.DataContext, item); return view; } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture) { throw new NotSupportedException("Cannot convert back"); } } } I can see that the extension is instantiated and I can see it return the MultiBinding that is used by the Template. I also see the call to the InternalConverter.Convert method, which sees the expected parameters (I see the collection provided by the nested {Binding}) and is successfully able to retrieve the ICollectionView for the incoming collection. The only problem is that FilterInternal never gets called. 
The template is ultimately being used by a TreeView, if that's relevant. I haven't been able to figure out why the FilterInternal method is not being called and I was hoping somebody might be able to offer some insight.

    Read the article

  • Confusion about C++ Python extensions, such as getting C++ values from Python values

    - by Matthew Mitchell
    I'm wanted to convert some of my python code to C++ for speed but it's not as easy as simply making a C++ function and making a few function calls. I have no idea how to get a C++ integer from a python integer object. I have an integer which is an attribute of an object that I want to use. I also have integers which are inside a list in the object which I need to use. I wanted to test making a C++ extension with this function: def setup_framebuffer(surface,flip=False): #Create texture if not done already if surface.texture is None: create_texture(surface) #Render child to parent if surface.frame_buffer is None: surface.frame_buffer = glGenFramebuffersEXT(1) glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, c_uint(int(surface.frame_buffer))) glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0) glPushAttrib(GL_VIEWPORT_BIT) glViewport(0,0,surface._scale[0],surface._scale[1]) glMatrixMode(GL_PROJECTION) glLoadIdentity() #Load the projection matrix if flip: gluOrtho2D(0,surface._scale[0],surface._scale[1],0) else: gluOrtho2D(0,surface._scale[0],0,surface._scale[1]) That function calls create_texture, so I will have to pass that function to the C++ function which I will do with the third argument. This is what I have so far, while trying to follow information on the python documentation: #include <Python.h> #include <GL/gl.h> static PyMethodDef SpamMethods[] = { ... {"setup_framebuffer", setup_framebuffer, METH_VARARGS,"Loads a texture from a Surface object to the OpenGL framebuffer."}, ... {NULL, NULL, 0, NULL} /* Sentinel */ }; static PyObject * setup_framebuffer(PyObject *self, PyObject *args){ bool flip; PyObject *create_texture, *arg_list,*pyflip,*frame_buffer_id; if (!PyArg_ParseTuple(args, "OOO", &surface,&pyflip,&create_texture)){ return NULL; } if (PyObject_IsTrue(pyflip) == 1){ flip = true; }else{ flip = false; } Py_XINCREF(create_texture); //Create texture if not done already if(texture == NULL){ arglist = Py_BuildValue("(O)", surface) result = PyEval_CallObject(create_texture, arglist); Py_DECREF(arglist); if (result == NULL){ return NULL; } Py_DECREF(result); } Py_XDECREF(create_texture); //Render child to parent frame_buffer_id = PyObject_GetAttr(surface, Py_BuildValue("s","frame_buffer")) if(surface.frame_buffer == NULL){ glGenFramebuffersEXT(1,frame_buffer_id); } glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer)); glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0); glPushAttrib(GL_VIEWPORT_BIT); glViewport(0,0,surface._scale[0],surface._scale[1]); glMatrixMode(GL_PROJECTION); glLoadIdentity(); //Load the projection matrix if (flip){ gluOrtho2D(0,surface._scale[0],surface._scale[1],0); }else{ gluOrtho2D(0,surface._scale[0],0,surface._scale[1]); } Py_INCREF(Py_None); return Py_None; } PyMODINIT_FUNC initcscalelib(void){ PyObject *module; module = Py_InitModule("cscalelib", Methods); if (m == NULL){ return; } } int main(int argc, char *argv[]){ /* Pass argv[0] to the Python interpreter */ Py_SetProgramName(argv[0]); /* Initialize the Python interpreter. Required. */ Py_Initialize(); /* Add a static module */ initscalelib(); }

    Read the article

  • 12.04 making BCM4313 card work with aircrack-ng?

    - by Charles Forest
    I'm a real Linux Noob, just started using it (this month) and until now i had no issues. now i'm trying to set-up aircrack-ng on my laptop, but it seems like it's using the worst card possible (or almost) there is a TON of tutorial on this card (seems to be hell to set-up) i have tryed some, but i ended up uninstalling my drivers, messing with my desktops, and ended by having no more "X" to close my windows (i have no clue how i ended there) i just re-installed my linux (took me 2 hours to setup everything again), but now i'm a bit "Scared" to try tutorials randomly again. Right now it says the driver is wl, wich is not the one i want (AFAIK it's not supported) i'm not sure what kind of informations are needed, but here's what i think could be usefull. lspci -knn 00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: agpgart-intel 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port [8086:0101] (rev 09) Kernel driver in use: pcieport Kernel modules: shpchp 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: i915 Kernel modules: i915 00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: ehci_hcd 00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 [8086:1c16] (rev b4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 [8086:1c18] (rev b4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: ehci_hcd 00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel modules: iTCO_wdt 00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: ahci 00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel modules: i2c-i801 01:00.0 3D controller 
[0302]: NVIDIA Corporation GF108 [GeForce GT 540M] [10de:0df4] (rev a1) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: nouveau Kernel modules: nouveau, nvidiafb WIRELESS CARD 02:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01) Subsystem: Wistron NeWeb Corp. Device [185f:051a] Kernel driver in use: wl Kernel modules: wl, bcma, brcmsmac REST... 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: r8169 Kernel modules: r8169 04:00.0 USB controller [0c03]: NEC Corporation uPD720200 USB 3.0 Host Controller [1033:0194] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: xhci_hcd Also, if i'm "screwed" with my hardware, just tell me.

    Read the article

  • What IPC method should I use between Firefox extension and C# code running on the same machine?

    - by Rory
    I have a question about how to structure communication between a (new) Firefox extension and existing C# code. The firefox extension will use configuration data and will produce other data, so needs to get the config data from somewhere and save it's output somewhere. The data is produced/consumed by existing C# code, so I need to decide how the extension should interact with the C# code. Some pertinent factors: It's only running on windows, in a relatively controlled corporate environment. I have a windows service running on the machine, built in C#. Storing the data in a local datastore (like sqlite) would be useful for other reasons. The volume of data is low, e.g. 10kb of uncompressed xml every few minutes, and isn't very 'chatty'. The data exchange can be asynchronous for the most part if not completely. As with all projects, I have limited resources so want an option that's relatively easy. It doesn't have to be ultra-high performance, but shouldn't add significant overhead. I'm planning on building the extension in javascript (although could be convinced otherwise if really necessary) Some options I'm considering: use an XPCOM to .NET/COM bridge use a sqlite db: the extension would read from and save to it. The c# code would run in the service, populating the db and then processing data created by the service. use TCP sockets to communicate between the extension and the service. Let the service manage a local data store. My problem with (1) is I think this will be tricky and not so easy. But I could be completely wrong? The main problem I see with (2) is the locking of sqlite: only a single process can write data at a time so there'd be some blocking. However, it would be nice generally to have a local datastore so this is an attractive option if the performance impact isn't too great. I don't know whether (3) would be particularly easy or hard ... or what approach to take on the protocol: something custom or http. Any comments on these ideas or other suggestions? UPDATE: I was planning on building the extension in javascript rather than c++
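    For option (3), a very rough sketch of the service side, assuming the Windows service hosts a local HttpListener that the extension can POST to over HTTP (the port and path below are placeholders, not from the question); the extension side would then just use XMLHttpRequest against the same URL:

    using System;
    using System.IO;
    using System.Net;

    public class LocalIpcListener
    {
        private readonly HttpListener _listener = new HttpListener();

        public void Start()
        {
            // Placeholder prefix - pick a fixed local port for the corporate machines.
            _listener.Prefixes.Add("http://localhost:8765/extension/");
            _listener.Start();
            _listener.BeginGetContext(OnRequest, null);
        }

        private void OnRequest(IAsyncResult ar)
        {
            HttpListenerContext context = _listener.EndGetContext(ar);
            _listener.BeginGetContext(OnRequest, null);   // keep accepting further requests

            using (var reader = new StreamReader(context.Request.InputStream))
            {
                string payload = reader.ReadToEnd();      // the ~10kb of XML from the extension
                // ... hand the payload to the existing C# code / local data store here ...
            }

            context.Response.StatusCode = 200;
            context.Response.Close();
        }

        public void Stop()
        {
            _listener.Stop();
        }
    }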

    Read the article

  • Audio Recording with Appcelerator on Android

    - by user951793
    I would like to record audio and then send the file to a webserver. I am using Titanium 1.8.2 on Win7. The application I am woring on is both for Android and iphone and I do realise that Titanium.Media.AudioRecorder and Titanium.Media.AudioPlayer are for these purpose. Let's concentrate on android for a while. On that platform you can achieve audio recording by creating an intent and then you handle the file in your application. See more here. This implementation has a couple of drawbacks: You cannot stay in your application (as a native audio recorder will start up) You only get back an uri from the recorder and not the actual file. Another implementation is done by Codeboxed. This module is for recording an audio without using intents. The only problem that I could not get this working (along with other people) and the codeboxed team does not respond to anyone since last year. So my question is: Do you know how to record audio on android without using an intent? Thanks in advance. Edit: My problem with codeboxed's module: I downloaded the module from here. I copied the zip file into my project directory. I edited my manifest file with: <modules> <module platform="android" version="0.1">com.codeboxed.audiorecorder</module> </modules> When I try and compile I receive the following error: [DEBUG] appending module: com.mwaysolutions.barcode.TitaniumBarcodeModule [DEBUG] module_id = com.codeboxed.audiorecorder [ERROR] The 'apiversion' for 'com.codeboxed.audiorecorder' in the module manifest is not a valid value. Please use a version of the module that has an 'apiversion' value of 2 or greater set in it's manifest file [DEBUG] touching tiapp.xml to force rebuild next time: E:\TitaniumProjects\MyProject\tiapp.xml I can manage to recognise the module by editing the module's manifest file to this: ` version: 0.1 description: My module author: Your Name license: Specify your license copyright: Copyright (c) 2011 by Your Company apiversion: 2 name: audiorecorder moduleid: com.codeboxed.audiorecorder guid: 747dce68-7d2d-426a-a527-7c67f4e9dfad platform: android minsdk: 1.7.0` But Then again I receive error on compiling: [DEBUG] "C:\Program Files\Java\jdk1.6.0_21\bin\javac.exe" -encoding utf8 -classpath "C:\Program Files 
(x86)\Android\android-sdk\platforms\android-8\android.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-media.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-platform.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\titanium.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\thirdparty.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\jaxen-1.1.1.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-locale.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-app.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-gesture.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-analytics.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\kroll-common.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-network.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\ti-commons-codec-1.3.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-ui.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-database.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\kroll-v8.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-xml.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\android-support-v4.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-filesystem.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-android.jar;E:\TitaniumProjects\MyProject\modules\android\com.mwaysolutions.barcode\0.3\barcode.jar;E:\TitaniumProjects\MyProject\modules\android\com.mwaysolutions.barcode\0.3\lib\zxing.jar;E:\TitaniumProjects\MyProject\modules\android\com.codeboxed.audiorecorder\0.1\audiorecorder.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\kroll-apt.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\lib\titanium-verify.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\lib\titanium-debug.jar" -d E:\TitaniumProjects\MyProject\build\android\bin\classes -proc:none -sourcepath E:\TitaniumProjects\MyProject\build\android\src -sourcepath E:\TitaniumProjects\MyProject\build\android\gen @c:\users\gabor\appdata\local\temp\tmpbqmjuy [ERROR] Error(s) compiling generated Java code [ERROR] E:\TitaniumProjects\MyProject\build\android\gen\com\petosoft\myproject\MyProjectApplication.java:44: cannot find symbol symbol : class AudiorecorderBootstrap location: package com.codeboxed.audiorecorder runtime.addExternalModule("com.codeboxed.audiorecorder", com.codeboxed.audiorecorder.AudiorecorderBootstrap.class); ^ 1 error

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility on the computing resource, we can provision and deploy an application or service with multiple instances over multiple machines. With the increment of the service instances, how to balance the incoming message and workload would become a new challenge. Currently there are two approaches we can use to pass the incoming messages to the service instances, I would like call them dispatcher mode and pulling mode.   Dispatcher Mode The dispatcher mode introduces a role which takes the responsible to find the best service instance to process the request. The image below describes the sharp of this mode. There are four clients communicate with the service through the underlying transportation. For example, if we are using HTTP the clients might be connecting to the same service URL. On the server side there’s a dispatcher listening on this URL and try to retrieve all messages. When a message came in, the dispatcher will find a proper service instance to process it. There are three mechanism to find the instance: Round-robin: Dispatcher will always send the message to the next instance. For example, if the dispatcher sent the message to instance 2, then the next message will be sent to instance 3, regardless if instance 3 is busy or not at that moment. Random: Dispatcher will find a service instance randomly, and same as the round-robin mode it regardless if the instance is busy or not. Sticky: Dispatcher will send all related messages to the same service instance. This approach always being used if the service methods are state-ful or session-ful. But as you can see, all of these approaches are not really load balanced. The clients will send messages at any time, and each message might take different process duration on the server side. This means in some cases, some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could be happened that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle. This brings some problem in our architecture. The first one is that, the response to the clients might be longer than it should be. As it’s shown in the figure above, message 6 and 9 can be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 since the dispatcher and round-robin mode. Secondly, if there are many requests came from the clients in a very short period, service instances might be filled by tons of pending tasks and some instances might be crashed. Third, if we are using some cloud platform to host our service instances, for example the Windows Azure, the computing resource is billed by service deployment period instead of the actual CPU usage. This means if any service instance is idle it is wasting our money! Last one, the dispatcher would be the bottleneck of our system since all incoming messages must be routed by the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balance. If we wants more capacity, we have to scale-up, or buy a hardware load balance which is very expensive, as well as scaling-out the service instances. Pulling Mode Pulling mode doesn’t need a dispatcher to route the messages. All service instances are listening to the same transport and try to retrieve the next proper message to process if they are idle. 
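    To make the pulling idea concrete, here is a conceptual worker-loop sketch (my illustration, not code from this post): every instance runs the same loop against the same shared queue, pulls the next message only when it has finished the previous one, and so the load balances itself with no dispatcher involved.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class PullingWorker
    {
        // Stands in for the message bus / queue that all service instances listen to.
        private readonly BlockingCollection<string> _sharedQueue;

        public PullingWorker(BlockingCollection<string> sharedQueue)
        {
            _sharedQueue = sharedQueue;
        }

        public void Run(CancellationToken token)
        {
            try
            {
                while (true)
                {
                    // Blocks until a pending message is available; a busy instance simply
                    // asks for the next message later, an idle one asks immediately.
                    string message = _sharedQueue.Take(token);
                    Process(message);
                }
            }
            catch (OperationCanceledException)
            {
                // Shutdown requested - stop pulling.
            }
        }

        private void Process(string message)
        {
            // ... the actual service work would go here ...
        }
    }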
Since there is no dispatcher in pulling mode, it requires some features on the transportation. The transportation must support multiple client connection and server listening. HTTP and TCP doesn’t allow multiple clients are listening on the same address and port, so it cannot be used in pulling mode directly. All messages in the transportation must be FIFO, which means the old message must be received before the new one. Message selection would be a plus on the transportation. This means both service and client can specify some selection criteria and just receive some specified kinds of messages. This feature is not mandatory but would be very useful when implementing the request reply and duplex WCF channel modes. Otherwise we must have a memory dictionary to store the reply messages. I will explain more about this in the following articles. Message bus, or the message queue would be best candidate as the transportation when using the pulling mode. First, it allows multiple application to listen on the same queue, and it’s FIFO. Some of the message bus also support the message selection, such as TIBCO EMS, RabbitMQ. Some others provide in memory dictionary which can store the reply messages, for example the Redis. The principle of pulling mode is to let the service instances self-managed. This means each instance will try to retrieve the next pending incoming message if they finished the current task. This gives us more benefit and can solve the problems we met with in the dispatcher mode. The incoming message will be received to the best instance to process, which means this will be very balanced. And it will not happen that some instances are busy while other are idle, since the idle one will retrieve more tasks to make them busy. Since all instances are try their best to be busy we can use less instances than dispatcher mode, which more cost effective. Since there’s no dispatcher in the system, there is no bottleneck. When we introduced more service instances, in dispatcher mode we have to change something to let the dispatcher know the new instances. But in pulling mode since all service instance are self-managed, there no extra change at all. If there are many incoming messages, since the message bus can queue them in the transportation, service instances would not be crashed. All above are the benefits using the pulling mode, but it will introduce some problem as well. The process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process the message. So we need more information to support debug and track. Real-time response may not be supported. All service instances will process the next message after the current one has done, if we have some real-time request this may not be a good solution. Compare with the Pros and Cons above, the pulling mode would a better solution for the distributed system architecture. Because what we need more is the scalability, cost-effect and the self-management.   WCF and WCF Transport Extensibility Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement the service. In this series I’m going to demonstrate how to implement the pulling mode on top of a message bus by extending the WCF. I don’t want to deep into every related field in WCF but will highlight its transport extensibility. 
When we implemented an RPC foundation there are many aspects we need to deal with, for example the message encoding, encryption, authentication and message sending and receiving. In WCF, each aspect is represented by a channel. A message will be passed through all necessary channels and finally send to the underlying transportation. And on the other side the message will be received from the transport and though the same channels until the business logic. This mode is called “Channel Stack” in WCF, and the last channel in the channel stack must always be a transport channel, which takes the responsible for sending and receiving the messages. As we are going to implement the WCF over message bus and implement the pulling mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we deep into the transport channel, let’s have a look on the message exchange patterns that WCF defines. Message exchange pattern (MEP) defines how client and service exchange the messages over the transportation. WCF defines 3 basic MEPs which are datagram, Request-Reply and Duplex. Datagram: Also known as one-way, or fire-forgot mode. The message sent from the client to the service, and no need any reply from the service. The client doesn’t care about the message result at all. Request-Reply: Very common used pattern. The client send the request message to the service and wait until the reply message comes from the service. Duplex: The client sent message to the service, when the service processing the message it can callback to the client. When callback the service would be like a client while the client would be like a service. In WCF, each MEP represent some channels associated. MEP Channels Datagram IInputChannel, IOutputChannel Request-Reply IRequestChannel, IReplyChannel Duplex IDuplexChannel And the channels are created by ChannelListener on the server side, and ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel mode, please refer to the MSDN document. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigated the WCF transport architecture, channel mode and MEP, we finally identified what we should do to extend our message bus based transport layer. They are: Binding: (Optional) Defines the channel elements in the channel stack and added our transport binding element at the bottom of the stack. But we can use the build-in CustomBinding as well. TransportBindingElement: Defines which MEP is supported in our transport and create the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint if using this transport. ChannelListener: Create the server side channel based on the MEP it’s. We can have one ChannelListener to create channels for all supported MEPs, or we can have ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: Create the client side channel based on the MEP it’s. We can have one ChannelFactory to create channels for all supported MEPs, or we can have ChannelFactory for each MEP. In this series I will use the second approach. 
Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport support Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one. Scaffold: In order to make our transport extension works we also need to implement some scaffold stuff. For example we need some classes to send and receive message though out message bus. We also need some codes to read and write the WCF message, etc.. These are not necessary but would be very useful in our example.   Message Bus There is only one thing remained before we can begin to implement our scaling-out support WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And I have said before we can use any message bus production if it’s satisfied with our requests. Here I would like to introduce an interface to separate the message bus from the WCF. This allows us to implement the bus operations by any kinds bus we are going to use. The interface would be like this. 1: public interface IBus : IDisposable 2: { 3: string SendRequest(string message, bool fromClient, string from, string to = null); 4:  5: void SendReply(string message, bool fromClient, string replyTo); 6:  7: BusMessage Receive(bool fromClient, string replyTo); 8: } There are only three methods for the bus interface. Let me explain one by one. The SendRequest method takes the responsible for sending the request message into the bus. The parameters description are: message: The WCF message content. fromClient: Indicates if this message was came from the client. from: The channel ID that this message was sent from. The channel ID will be generated when any kinds of channel was created, which will be explained in the following articles. to: The channel ID that this message should be received. In Request-Reply and Duplex MEP this is necessary since the reply message must be received by the channel which sent the related request message. The SendReply method takes the responsible for sending the reply message. It’s very similar as the previous one but no “from” parameter. This is because it’s no need to reply a reply message again in any MEPs. The Receive method takes the responsible for waiting for a incoming message, includes the request message and specified reply message. It returned a BusMessage object, which contains some information about the channel information. The code of the BusMessage class is 1: public class BusMessage 2: { 3: public string MessageID { get; private set; } 4: public string From { get; private set; } 5: public string ReplyTo { get; private set; } 6: public string Content { get; private set; } 7:  8: public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content) 9: { 10: MessageID = messageId; 11: From = fromChannelId; 12: ReplyTo = replyToChannelId; 13: Content = content; 14: } 15: } Now let’s implement a message bus based on the IBus interface. Since I don’t want you to buy and install the TIBCO EMS or any other message bus products, I will implement an in process memory bus. This bus is only for test and sample purpose. It can only be used if the service and client are in the same process. Very straightforward. 
1: public class InProcMessageBus : IBus 2: { 3: private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue; 4: private readonly object _lock; 5:  6: public InProcMessageBus() 7: { 8: _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>(); 9: _lock = new object(); 10: } 11:  12: public string SendRequest(string message, bool fromClient, string from, string to = null) 13: { 14: var entity = new InProcMessageEntity(message, fromClient, from, to); 15: _queue.TryAdd(entity.ID, entity); 16: return entity.ID.ToString(); 17: } 18:  19: public void SendReply(string message, bool fromClient, string replyTo) 20: { 21: var entity = new InProcMessageEntity(message, fromClient, null, replyTo); 22: _queue.TryAdd(entity.ID, entity); 23: } 24:  25: public BusMessage Receive(bool fromClient, string replyTo) 26: { 27: InProcMessageEntity e = null; 28: while (true) 29: { 30: lock (_lock) 31: { 32: var entity = _queue 33: .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To))) 34: .FirstOrDefault(); 35: if (entity.Key != Guid.Empty && entity.Value != null) 36: { 37: _queue.TryRemove(entity.Key, out e); 38: } 39: } 40: if (e == null) 41: { 42: Thread.Sleep(100); 43: } 44: else 45: { 46: return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content); 47: } 48: } 49: } 50:  51: public void Dispose() 52: { 53: } 54: } The InProcMessageBus stores the messages in the objects of InProcMessageEntity, which can take some extra information beside the WCF message itself. 1: public class InProcMessageEntity 2: { 3: public Guid ID { get; set; } 4: public string Content { get; set; } 5: public bool FromClient { get; set; } 6: public string From { get; set; } 7: public string To { get; set; } 8:  9: public InProcMessageEntity() 10: : this(string.Empty, false, string.Empty, string.Empty) 11: { 12: } 13:  14: public InProcMessageEntity(string content, bool fromClient, string from, string to) 15: { 16: ID = Guid.NewGuid(); 17: Content = content; 18: FromClient = fromClient; 19: From = from; 20: To = to; 21: } 22: }   Summary OK, now I have all necessary stuff ready. The next step would be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side especially if we are using the cloud platform: dispatcher mode and pulling mode. And I compared the Pros and Cons of them. Then I introduced the WCF channel stack, channel mode and the transport extension part, and identified what we should do to create our own WCF transport extension, to let our WCF services using pulling mode based on a message bus. And finally I provided some classes that need to be used in the future posts that working against an in process memory message bus, for the demonstration purpose only. In the next post I will begin to implement the transport extension step by step.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Avoiding duplication in setting properties on the task in Rake tasks

    - by Stray
    I have a bunch of rake build tasks. They each have unique input/output properties, but the majority of the properties I set on the tasks are the same each time. Currently I'm doing that via simple repetition like this:

    task :buildThisModule => "bin/modules/thisModule.swf"

    mxmlc "bin/modules/thisModule.swf" do |t|
      t.input = "src/project/modules/ThisModule.as"
      t.prop1 = value1
      t.prop2 = value2
      ... (And many more property=value sets that are the same in each task)
    end

    task :buildThatModule => "bin/modules/thatModule.swf"

    mxmlc "bin/modules/thatModule.swf" do |t|
      t.input = "src/project/modules/ThatModule.as"
      t.prop1 = value1
      t.prop2 = value2
      ... (And many more property=value sets that are the same in each task)
    end

    In my usual programming headspace I'd expect to be able to break out the population of the recurring task properties into a re-usable function. Is there a Rake analogue for this? Some way I can have a single function where the shared properties are set on any task? Something equivalent to:

    task :buildThisModule => "bin/modules/thisModule.swf"

    mxmlc "bin/modules/thisModule.swf" do |t|
      addCommonTaskParameters(t)
      t.input = "src/project/modules/ThisModule.as"
    end

    task :buildThatModule => "bin/modules/thatModule.swf"

    mxmlc "bin/modules/thatModule.swf" do |t|
      addCommonTaskParameters(t)
      t.input = "src/project/modules/ThatModule.as"
    end

    Thanks.

    Read the article

  • Overview of Qt 5's modular architecture, a blog post by Guillaume Belz

    The release of Qt 5 is taking shape day by day. One of the main changes in Qt 5 is the reorganization of the various modules. Some functionality has been split out into independent modules, such as the move of the widgets from QtGui to QtWidgets, while other functionality has been moved into existing modules, such as the integration of the OpenGL features from QtOpenGL into QtGui. This blog post presents all of the Qt 5 modules and the main changes you will find in them. The Qt 5 modules

    Read the article

  • Wireless Problem on HP Pavilion G6

    - by user47954
    I have a Broadcom wireless card in my laptop and the wireless is not working correctly. Right in front of the router the wireless signal is 70%, across the room it barely works, and in another room it disconnects. I have the drivers and everything. I am running Ubuntu 11.10 64-bit. It works perfectly in Windows 7. Can anyone help? Here is my lspci output:

    00:00.0 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1705]
    00:01.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:9649]
    00:01.1 Audio device [0403]: ATI Technologies Inc Device [1002:1714]
        Kernel driver in use: HDA Intel
        Kernel modules: snd-hda-intel
    00:04.0 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:1709]
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:11.0 SATA controller [0106]: Advanced Micro Devices [AMD] Device [1022:7804]
        Kernel driver in use: ahci
        Kernel modules: ahci
    00:12.0 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7807] (rev 11)
        Kernel driver in use: ohci_hcd
    00:12.2 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7808] (rev 11)
        Kernel driver in use: ehci_hcd
    00:13.0 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7807] (rev 11)
        Kernel driver in use: ohci_hcd
    00:13.2 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7808] (rev 11)
        Kernel driver in use: ehci_hcd
    00:14.0 SMBus [0c05]: Advanced Micro Devices [AMD] Device [1022:780b] (rev 13)
        Kernel modules: i2c-piix4
    00:14.2 Audio device [0403]: Advanced Micro Devices [AMD] Device [1022:780d] (rev 01)
        Kernel driver in use: HDA Intel
        Kernel modules: snd-hda-intel
    00:14.3 ISA bridge [0601]: Advanced Micro Devices [AMD] Device [1022:780e] (rev 11)
    00:14.4 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:780f] (rev 40)
    00:15.0 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:43a0]
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:15.1 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:43a1]
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:15.2 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:43a2]
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:18.0 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1700] (rev 43)
    00:18.1 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1701]
    00:18.2 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1702]
    00:18.3 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1703]
    00:18.4 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1704]
    00:18.5 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1718]
    00:18.6 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1716]
    00:18.7 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1719]
    01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)
        Kernel driver in use: r8169
        Kernel modules: r8169
    07:00.0 Network controller [0280]: Broadcom Corporation Device [14e4:4727] (rev 01)
        Kernel driver in use: wl
        Kernel modules: wl
    08:00.0 Class [ff00]: Realtek Semiconductor Co., Ltd. Device [10ec:5209] (rev 01)

    Read the article

  • Ubuntu 12.10 graphics do not work properly

    - by madox2
    My graphics on Ubuntu 12.10 do not work as well as on 12.04. After the upgrade I installed the driver

    sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
    sudo apt-get update
    sudo apt-get install nvidia-current

    for my Nvidia GTS 450 graphics card. But sometimes I see a slight lag in videos played in VLC player, some of the desktop and window effects are lagging, and sometimes I can see an indescribable source of pixels on my screen at the start of Ubuntu, and so on. I feel a difference between 12.04 and 12.10 in favour of the former version. Does anyone know what's wrong or what I am missing? Here is the output of lspci -k:

    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        Subsystem: Giga-byte Technology Device 1c3a
        Kernel driver in use: mei
        Kernel modules: mei
    00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        Subsystem: Giga-byte Technology Device 5006
        Kernel driver in use: ehci_hcd
    00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        Subsystem: Giga-byte Technology Device a000
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
    00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        Subsystem: Giga-byte Technology Device 5006
        Kernel driver in use: ehci_hcd
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
    00:1f.0 ISA bridge: Intel Corporation H61 Express Chipset Family LPC Controller (rev 05)
        Subsystem: Giga-byte Technology Device 5001
        Kernel driver in use: lpc_ich
        Kernel modules: lpc_ich
    00:1f.2 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller (rev 05)
        Subsystem: Giga-byte Technology Device b002
        Kernel driver in use: ata_piix
    00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        Subsystem: Giga-byte Technology Device 5001
        Kernel modules: i2c-i801
    00:1f.5 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller (rev 05)
        Subsystem: Giga-byte Technology Device b002
        Kernel driver in use: ata_piix
    01:00.0 VGA compatible controller: NVIDIA Corporation GF116 [GeForce GTS 450] (rev a1)
        Subsystem: CardExpert Technology Device 0401
        Kernel driver in use: nvidia
        Kernel modules: nvidia_current, nouveau, nvidiafb
    01:00.1 Audio device: NVIDIA Corporation GF116 High Definition Audio Controller (rev a1)
        Subsystem: CardExpert Technology Device 0401
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
    03:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
        Subsystem: Giga-byte Technology Device e000
        Kernel driver in use: atl1c
        Kernel modules: atl1c

    Read the article

  • How to ship numpy with web2py application under myapp/modules?

    - by Newbie07
    I am having the following error while importing numpy from applications/myapp/modules:

    Traceback (most recent call last):
      File "/home/mdipierro/make_web2py/web2py/gluon/restricted.py", line 212, in restricted
      File "D:/web2py_win/web2py/applications/myapp/controllers/default.py", line 13, in <module>
      File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 100, in custom_importer
      File "applications\myapp\modules\numpy\__init__.py", line 137, in <module>
      File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 81, in custom_importer
    ImportError: Cannot import module 'add_newdocs'

    I tried adding 'application.myapp.modules.' to the 'import add_newdocs' statement in numpy\__init__.py, and the error propagates to other subsequent imports (i.e. add_newdocs imports some other modules and I get the ImportError again for those imports). So I narrowed the problem down to the "working directory" of the import statement. However, I do not wish to add 'application.myapp.modules.' to every import statement inside the package, since it would be impractical and hard to edit if someone decides to rename the app later on. How do I make the import work smoothly?

    NOTE: It is necessary for me to put the numpy package in the app to ensure ease of deployment.

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >