Search Results

Search found 13889 results on 556 pages for 'results'.


  • Wireless device bug on 13.10: BCM4313 registers as eth1 instead of wlan0, and no internet access

    - by user205691
    My hotel WiFi requires me to log in with a username and password after connecting to the hotspot: the browser opens a page with username and password fields, and only after logging in do I get internet access. Unfortunately, neither Firefox nor Chromium works. I don't think it is browser related; it looks like a setting for the WiFi driver or network manager that is causing this. I am using the Broadcom 802.11 STA wireless driver (proprietary); I tried the open-source one as well, with the same result. The image linked below shows my WiFi connection settings and Chromium. The login page itself only comes up after a long time, and after entering the credentials it keeps loading forever. It is the same with every other browser, so I don't think it is a browser issue but something to do with the WiFi settings or network manager.

    Interestingly, I am able to connect to WiFi networks with a WPA key without any issue. The ad-hoc hotspot is the problem, and that is my regular home network. I hope I can get some help solving this issue. I have tried repeating the same hotspot after logging in from my Android phone, by creating a virtual repeater with a WPA key, and that works: I can browse on Ubuntu using this method, but I can't keep doing this regularly. I tried loading the same hotel login page while browsing through the repeater WiFi created on my phone (screenshot attached below), and the page loads quickly and easily. So does this mean something is wrong with the way network manager handles ad-hoc connectivity and login? I installed wicd, but it crashes on startup and is not helpful at all.

    Screenshot of Chromium page
    Login page with repeated hotspot

    ifconfig output in my terminal:

        krishna@krishna-HP-ENVY-4-Notebook-PC:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 28:92:4a:1d:54:fa
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        eth1      Link encap:Ethernet  HWaddr e0:06:e6:89:fa:49
                  inet addr:10.24.1.71  Bcast:10.24.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::e206:e6ff:fe89:fa49/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:10940 errors:0 dropped:0 overruns:0 frame:348431
                  TX packets:6611 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7669631 (7.6 MB)  TX bytes:864195 (864.1 KB)
                  Interrupt:17

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:2146 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2146 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:166120 (166.1 KB)  TX bytes:166120 (166.1 KB)

    I wonder why the wireless interface is configured as eth1. I think this was a bug in earlier Ubuntu versions, but is it normal in 13.10, or is there a wrong configuration here? The wireless device in my PC is a BCM4313, and I have installed bcmwl-kernel-source and wireless-tools to support it. I also reinstalled the bcmwl kernel package as suggested on the Broadcom website, via the Synaptic package manager. Nothing has changed the situation. I tried booting into a live USB, and there the ifconfig output shows the wireless device under wlan0; the wireless then connects and loads the login page. So is the problem with the device configuration? I really want to get this fixed before I start configuring other things on this laptop, like the ATI graphics (for overheating); lack of internet access is too bad a bug for me. Any help is appreciated!
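    One way to test whether the interface name itself is part of the problem (an editor's sketch, not from the original post) is to pin the name with a udev rule keyed to the adapter's MAC address shown in the ifconfig output above, then reboot. The file path and the assumption that the proprietary wl driver honours the rename are not confirmed by the post, so treat this as a diagnostic step rather than a fix:

        # /etc/udev/rules.d/70-persistent-net.rules  (assumed location on Ubuntu 13.10)
        # Rename the Broadcom BCM4313, currently eth1, to wlan0 by matching its MAC address.
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="e0:06:e6:89:fa:49", NAME="wlan0"

    If the captive-portal login still stalls after the rename, the interface name was not the culprit and the ad-hoc/network-manager behaviour is the more likely cause.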

    Read the article

  • In the Firing Line: The impact of project and portfolio performance on the CEO

    - by Melissa Centurio Lopes
    What are the primary measurements for rating CEO performance? For corporate boards, business analysts, investors, and the trade press, the metrics they deploy are relatively binary in nature: what is being done to generate earnings, and what is being done to build and sustain high performance? As for the market, interest is primarily aroused when operational and financial performance falls outside planned commitments for the year. When organizations announce better-than-predicted results, they usually experience an immediate increase in share price. Likewise, poor results have an obviously negative impact on the share price and affect the role and tenure of the incumbent CEO.

    The danger for the CEO is that the risk of failure is ever present, ranging from manufacturing delays and supply chain issues to labor shortages and scope creep. This risk is heightened by the involvement of secondary suppliers providing services critical to overall work schedules, and magnified further across a portfolio of programs and projects underway at any one time - and all set within a global context. All can impact planned return on investment and have an inevitable impact on the share price - the primary empirical measure of day-to-day performance.

    Read the complete complimentary report, In the Firing Line, and explore the direct link between the health of the portfolio and CEO performance. The report provides an overview of the responsibility the CEO has for implementing and maintaining a culture of accountability, offers examples of some of the higher-profile project failings in recent years, and details the capabilities available to the CEO to mitigate the risks residing in their own portfolios.

    Read the article

  • CFOs: Do You Have a Playbook for Growth?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs

    In most global markets, CFOs are optimistic about their company's growth opportunities. Deloitte's CFO Signals report, "Time to Accelerate," found that:

    - In the U.K., business optimism is at its highest level in three and a half years.
    - Optimism in North America rose from a strong +42% last quarter (Q2 to Q3 2013) to an even stronger +54%.
    - In the inaugural Southeast Asia survey, 44% of CFOs reported a positive outlook despite worries over the Chinese economy and political uncertainty.

    Sustainable and profitable business growth doesn't usually happen by accident. Companies need a playbook for growth that's owned by the CFO. And today, that playbook must leverage the six enabling technologies: Social, Big Data, Mobile, Cloud, Analytics, and the Internet of Things (or, as Oracle president Mark Hurd explains, "the Internet of the People"). On Monday, June 9 at 2:00 pm Eastern, CFO.com is hosting a webcast, "The CFO Playbook on Growth: How CFOs Can Boost Efficiency and Performance with Automation".

    "Investing in technology begins with a business metric driven business case with clear tangible business results expected," says John Lieblang, Affiliate Partner with Waterstone Management Group. "The progressive CFO has learned how to forge a partnership with the CIO to align everyone in the 'result value chain' to be accountable for the business results, not just for functional technology."

    Click HERE to register.

    Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter, and sign up to receive the latest communications from Oracle's industry leaders and experts.

    Jim Lein: I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado, and love relating stories about creativity and innovation, whether they are about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Can anybody help me restructure my UITableView code into the MVC pattern?

    - by user2877880
    I have written a view controller that gets data from the internet and displays it in a UITableView, using a JSON parser that identifies its objects with objectForKey:. What I would like your help with is converting this into the MVC pattern, to make it less clumsy, instead of including everything in the same controller class. Please try explaining it to me in terms of my code. Thanks in advance. The code is given below:

    #import "ViewController.h"
    #import "AFNetworking.h"
    #import "ModelTableArray.h"

    @implementation ViewController

    @synthesize tableView = _tableView, activityIndicatorView = _activityIndicatorView, movies = _movies;

    - (void)viewDidLoad {
        [super viewDidLoad];

        // Setting Up Table View
        self.tableView = [[UITableView alloc] initWithFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height) style:UITableViewStylePlain];
        self.tableView.dataSource = self;
        self.tableView.delegate = self;
        self.tableView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        self.tableView.hidden = YES;
        [self.view addSubview:self.tableView];

        // Setting Up Activity Indicator View
        self.activityIndicatorView = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray];
        self.activityIndicatorView.hidesWhenStopped = YES;
        self.activityIndicatorView.center = self.view.center;
        [self.view addSubview:self.activityIndicatorView];
        [self.activityIndicatorView startAnimating];

        // Initializing Data Source
        self.movies = [[NSArray alloc] init];
        NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rocky&country=us&entity=movie"];
        NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url];

        UIRefreshControl *refreshControl = [[UIRefreshControl alloc] init];
        [refreshControl addTarget:self action:@selector(refresh:) forControlEvents:UIControlEventValueChanged];
        [self.tableView addSubview:refreshControl];
        [refreshControl endRefreshing];

        AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
            self.movies = [JSON objectForKey:@"results"];
            [self.activityIndicatorView stopAnimating];
            [self.tableView setHidden:NO];
            [self.tableView reloadData];
        } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) {
            NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo);
        }];
        [operation start];
    }

    - (void)refresh:(UIRefreshControl *)sender {
        NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rambo&country=us&entity=movie"];
        NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url];
        AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
            self.movies = [JSON objectForKey:@"results"];
            [self.activityIndicatorView stopAnimating];
            [self.tableView setHidden:NO];
            [self.tableView reloadData];
        } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) {
            NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo);
        }];
        [operation start];
        [sender endRefreshing];
    }

    - (void)viewDidUnload {
        [super viewDidUnload];
    }

    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        return YES;
    }

    // Table View Data Source Methods
    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        if (self.movies && self.movies.count) {
            return self.movies.count;
        } else {
            return 0;
        }
    }

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        static NSString *cellID = @"Cell Identifier";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellID];
        if (!cell) {
            cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:cellID];
        }
        NSDictionary *movie = [self.movies objectAtIndex:indexPath.row];
        cell.textLabel.text = [movie objectForKey:@"trackName"];
        cell.detailTextLabel.text = [movie objectForKey:@"artistName"];
        NSURL *url = [[NSURL alloc] initWithString:[movie objectForKey:@"artworkUrl100"]];
        [cell.imageView setImageWithURL:url placeholderImage:[UIImage imageNamed:@"placeholder"]];
        return cell;
    }

    @end

    Read the article

  • Web Service Example - Part 3: Asynchronous

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 3 of our Web Service examples.  In this posting we'll take a look at firing the web service asynchronously and then filling in the UI when it completes.  This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data. Getting the sample code: Just click here to download a zip of the entire project.  You can unzip it and load it into JDeveloper and deploy it either to iOS or Android.  Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.  Note: This is a different workspace than WS-Part2 What's different? In this example, when you click the Search button on the Forecast By Zip option, now it takes you directly to the results page, which is initially blank.  When the web service returns a second or two later the data pops into the UI.  If you go back to the search page and hit Search it will again clear the results and invoke the web service asynchronously.  This isn't really that useful for this particular example but it shows an important technique that can be used for other use cases. How it was done 1)  First we created a new class, ForecastWorker, that implements the Runnable interface.  This is used as our worker class that we create an instance of and pass to a new thread that we create when the Search button is pressed inside the retrieveForecast actionListener handler.  Once the thread is started, the retrieveForecast returns immediately.  2)  The rest of the code that we had previously in the retrieveForecast method has now been moved to the retrieveForecastAsync.  Note that we've also added synchronized specifiers on both these methods so they are protected from re-entrancy. 3)  The run method of the ForecastWorker class then calls the retrieveForecastAsync method.  This executes the web service code that we had previously, but now on a separate thread so the UI is not locked.  If we had already shown data on the screen it would have appeared before this was invoked.  Note that you do not see a loading indicator either because this is on a separate thread and nothing is blocked. 4)  The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents().   We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them.  As soon as you do this, the UI will get updated if any changes have been queued. Summary of Fundamental Changes In This Application The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns.  This allows an application to provide a better user experience in many cases because data that is already available locally is displayed while lengthy queries or web service calls can be done in the background and the UI updated when they return.  There are many different use cases for background threads and this is just one example of optimizing the user experience and generating a better mobile application. 

    Read the article

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
    Hello. I'm the webmaster of an online shop based in Austria (Europe), so we registered "example.at". We also own other domain names like "example-shop.com" and "example.info". Currently all those domains are redirected (301) to the .at one. Still available are "example.net" and "example.org" (and .ws/.cc); unfortunately .de and .eu are not available. The .com is currently owned by one of our partners; the contract ends in 2012, but until then we have no chance of getting it.

    Recently I read more about geo-targeting and noticed one big issue: the TLD ".at" is hardly recognised in Germany (google.de), whereas it is excellently listed in Austria (google.at). And because of the .at, I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en

    This is a big problem. I looked at Google Analytics and - although Germany is 10x as big as Austria - there are more visits from Austria. So, how should I configure the domains in order to get the best results in both Germany and Austria? I thought of some solutions:

    1. Stop redirecting the .info. There would then be a duplicate of the .at site. Moreover, in Webmaster Tools, I could set the target location of the .info to Germany. As the .at still targets Austria, both would be targeted - however, I don't know if Google punishes one of them because of the duplicate content.
    2. Same as 1, but with .net or .org (I think .info is not a "nice" domain, and moreover I think search engines prefer .com, .net or .org to .info).
    3. Same as 1 (or 2), but with a rel="canonical" on the new one pointing to the .at. Con: I don't think this will improve the situation, because it still tells Google that the .at one is more important, as in: "if .info points to .at, the target may still be Austria".
    4. rel="canonical" on the .at pointing to the new one (.info, .net or .org). However, I fear this will have a negative impact on the listing on google.at, because: "Hey, the well-known .at is not important anymore, so let's focus on the .info, which is not well known." Therefore: a bad position in search results.
    5. Redirect .at to the new one (.info, .net or .org) with a 301 redirect. Con: might be worse than 4; we might lose PageRank (or "the value of the page", because Google says that PageRank is not important anymore). Moreover, this might be even more confusing for the customers. In 3 or 4, customers don't get redirected and do not see the canonical meta tag.

    So, dear experts, please tell me what the best option would be! Thank you very much for your advice in advance, and please excuse the long question. I really appreciate this network! Please note: it's exactly the same content AND language - in Austria we speak German.

    Read the article

  • Making user input/math on data fast, unlike excel type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like Excel are terribly slow for me, so I'm trying to come up with another solution.

    Originally I used Excel. A1 was the cell that contained the data, and all other cells in use calculated something on A1, or on other cells that could in the end all be traced back to A1. A1 was like an element of an array, and I incremented it to go through all my data. This was way too slow. So the only other option I found originally was to hand-code the calculations in C# inside a loop, and then simply recompile each time I changed my math. This was terribly slow to do, and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand-coding events for each cell-like calculation would also be very slow.

    Next I created an application to read Excel and to perfectly imitate it, which is what I now use. Basically I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas through a lot more data a whole lot faster. The advantage is that my application dependency-sorts everything (or I could use events) so I don't have to, like Excel does - and of course the speed. But now it's not a single application anymore. Instead it's two applications: one which only reads my formulas and writes another program, and the other being the result, which only lives for a short while before I do other runs through my data with different formulas and settings. So I can't see multiple results at one time without introducing even more programs, like a database, or at least having the two applications talk to each other.

    My idea was to have a DLL that would be written, compiled, loaded, and unloaded again and again - a self-updating program, sort of. But apparently that's not possible without another AppDomain, which means data has to be marshaled to be moved between the AppDomains. That would slow things down - not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention a huge problem with restarting an application again and again, which is having to reload ALL my data into memory again and again. But it's still a whole lot faster than Excel.

    I'm really puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and still be fast. My understanding is that it would have to do things the way Excel does, which is to evaluate strings again and again, so my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn much to do some simple math on data. My understanding is it's either compiling my user-defined stuff into a program, or evaluating it from a string again and again. And my only option is probably to switch operating systems or something, to be able to have a program compile and run itself without stopping (writing/compiling a DLL, loading the DLL into the program, unloading, and repeating). Can someone give me some idea of how computers work here? Is anything better possible - like a running program that can accept user input, compile it, and then unload it later? I mean, heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the cave man days?
    Sorry, it's just so super frustrating not knowing what one can and can't do. If only I could understand and learn this stuff fast enough.
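    As a rough illustration of the idea behind the question - compiling a user-entered formula once at runtime and then reusing it over a data set, without rebuilding the host application - here is a minimal sketch in Python rather than the asker's C#. The formula string, column names, and sample rows are invented for the example:

        # Sketch: compile a user-supplied formula once, then apply it to many rows.
        # The formula text, column names, and sample data are made up for illustration.

        sample_rows = [
            {"price": 10.0, "qty": 3},
            {"price": 4.5, "qty": 12},
            {"price": 7.25, "qty": 5},
        ]

        user_formula = "price * qty * 1.2"   # e.g. typed in by the user at runtime

        # compile() turns the string into a code object a single time;
        # eval() then runs that code object per row, avoiding repeated re-parsing.
        compiled = compile(user_formula, "<user formula>", "eval")

        def run_formula(rows, code_obj):
            results = []
            for row in rows:
                # Each row's columns become the variables visible to the formula.
                results.append(eval(code_obj, {"__builtins__": {}}, dict(row)))
            return results

        print(run_formula(sample_rows, compiled))   # [36.0, 64.8, 43.5]

    In the .NET world the analogous options would be compiling expression trees or hosting a scripting engine in-process, but which of those fits best depends on details the question doesn't give.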

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework.

    DataContractJsonSerializer sucks

    As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

    - No support for untyped values (object, dynamic, anonymous types)
    - MS AJAX style date formatting
    - Ugly serialization formats for types like Dictionaries

    To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there is IEnumerable query results from a database, with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid, for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

    public object GetAnonymousType()
    {
        return new { name = "Rick", company = "West Wind", entered = DateTime.Now };
    }

    Basically, anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet on in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version.

    What about JsonValue?

    To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
    For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this:

    public JsonValue GetJsonValue()
    {
        dynamic json = new JsonObject();
        json.name = "Rick";
        json.company = "West Wind";
        json.entered = DateTime.Now;

        dynamic address = new JsonObject();
        address.street = "32 Kaiea";
        address.zip = "96779";
        json.address = address;

        dynamic phones = new JsonArray();
        json.phoneNumbers = phones;

        dynamic phone = new JsonObject();
        phone.type = "Home";
        phone.number = "808 123-1233";
        phones.Add(phone);

        phone = new JsonObject();
        phone.type = "Mobile";
        phone.number = "808 123-1234";
        phones.Add(phone);

        //var jsonString = json.ToString();
        return json;
    }

    which produces the following output (formatted here for easier reading):

    {
        name: "rick",
        company: "West Wind",
        entered: "2012-03-08T15:33:19.673-10:00",
        address: {
            street: "32 Kaiea",
            zip: "96779"
        },
        phoneNumbers: [
            { type: "Home", number: "808 123-1233" },
            { type: "Mobile", number: "808 123-1234" }
        ]
    }

    If you need to build a simple JSON type on the fly, these types work great. But if you have an existing type - or worse, a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary. It's a manual process.

    Using alternate Serializers in Web API

    So, currently the default serializer in Web API is DataContractJsonSerializer, and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions, or Json.NET, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
    Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

    using System;
    using System.Net.Http.Formatting;
    using System.Threading.Tasks;
    using System.Web.Script.Serialization;
    using System.Json;
    using System.IO;

    namespace Westwind.Web.WebApi
    {
        public class JavaScriptSerializerFormatter : MediaTypeFormatter
        {
            public JavaScriptSerializerFormatter()
            {
                SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
            }

            protected override bool CanWriteType(Type type)
            {
                // don't serialize JsonValue structure use default for that
                if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                    return false;
                return true;
            }

            protected override bool CanReadType(Type type)
            {
                if (type == typeof(IKeyValueModel))
                    return false;
                return true;
            }

            protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
            {
                var task = Task<object>.Factory.StartNew(() =>
                {
                    var ser = new JavaScriptSerializer();
                    string json;
                    using (var sr = new StreamReader(stream))
                    {
                        json = sr.ReadToEnd();
                        sr.Close();
                    }
                    object val = ser.Deserialize(json, type);
                    return val;
                });
                return task;
            }

            protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
            {
                var task = Task.Factory.StartNew(() =>
                {
                    var ser = new JavaScriptSerializer();
                    var json = ser.Serialize(value);
                    byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                    stream.Write(buf, 0, buf.Length);
                    stream.Flush();
                });
                return task;
            }
        }
    }

    Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries, which is very undesirable.
    If you'd rather use Json.NET, here's the JSON.NET version of the formatter:

    // this code requires a reference to JSON.NET in your project
    #if true
    using System;
    using System.Net.Http.Formatting;
    using System.Threading.Tasks;
    using System.Web.Script.Serialization;
    using System.Json;
    using Newtonsoft.Json;
    using System.IO;
    using Newtonsoft.Json.Converters;

    namespace Westwind.Web.WebApi
    {
        public class JsonNetFormatter : MediaTypeFormatter
        {
            public JsonNetFormatter()
            {
                SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
            }

            protected override bool CanWriteType(Type type)
            {
                // don't serialize JsonValue structure use default for that
                if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                    return false;
                return true;
            }

            protected override bool CanReadType(Type type)
            {
                if (type == typeof(IKeyValueModel))
                    return false;
                return true;
            }

            protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
            {
                var task = Task<object>.Factory.StartNew(() =>
                {
                    var settings = new JsonSerializerSettings()
                    {
                        NullValueHandling = NullValueHandling.Ignore,
                    };
                    var sr = new StreamReader(stream);
                    var jreader = new JsonTextReader(sr);
                    var ser = new JsonSerializer();
                    ser.Converters.Add(new IsoDateTimeConverter());
                    object val = ser.Deserialize(jreader, type);
                    return val;
                });
                return task;
            }

            protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
            {
                var task = Task.Factory.StartNew(() =>
                {
                    var settings = new JsonSerializerSettings()
                    {
                        NullValueHandling = NullValueHandling.Ignore,
                    };
                    string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                                              new JsonConverter[1] { new IsoDateTimeConverter() });
                    byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                    stream.Write(buf, 0, buf.Length);
                    stream.Flush();
                });
                return task;
            }
        }
    }
    #endif

    One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling, and you can plug in the IsoDateTimeConverter, which is nice to produce the proper ISO dates that I would expect any JSON serializer to output these days.

    Hooking up the Formatters

    Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

    protected void Application_Start(object sender, EventArgs e)
    {
        // Action based routing (used for RPC calls)
        RouteTable.Routes.MapHttpRoute(
            name: "StockApi",
            routeTemplate: "stocks/{action}/{symbol}",
            defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" }
        );

        // WebApi Configuration to hook up formatters and message handlers
        // optional
        RegisterApis(GlobalConfiguration.Configuration);
    }

    public static void RegisterApis(HttpConfiguration config)
    {
        // Add JavaScriptSerializer formatter instead - add at top to make default
        //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

        // Add Json.net formatter - add at the top so it fires first!
        // This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled
        config.Formatters.Insert(0, new JsonNetFormatter());
    }

    One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter, because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

    Summary

    Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios: if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and don't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates, for example), so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward, it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of integrating it now at the last minute, especially given that Json.NET has a similar set of lower-level JSON objects (JsonValue/JsonObject etc.) which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray, and now Json.NET). For years I've been using my own JSON serializer because the built-in choices are both limited. However, with an official endorsement of Json.NET I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api, AJAX, ASP.NET

    Read the article

  • What is the best Battleship AI?

    - by John Gietzen
    Battleship! Back in 2003 (when I was 17), I competed in a Battleship AI coding competition. Even though I lost that tournament, I had a lot of fun and learned a lot from it. Now, I would like to resurrect this competition, in search of the best Battleship AI. Here is the framework: Battleship.zip. The winner will be awarded +450 reputation! The competition will be held starting on the 17th of November, 2009. No entries or edits later than zero-hour on the 17th (Central Standard Time) will be accepted. Submit your entries early, so you don't miss your opportunity! To keep this OBJECTIVE, please follow the spirit of the competition.

    Rules of the game:
    - The game is played on a 10x10 grid.
    - Each competitor will place each of 5 ships (of lengths 2, 3, 3, 4, 5) on their grid. No ships may overlap, but they may be adjacent.
    - The competitors then take turns firing single shots at their opponent. A variation on the game allows firing multiple shots per volley, one for each surviving ship.
    - The opponent will notify the competitor whether the shot sinks, hits, or misses.
    - Game play ends when all of the ships of any one player are sunk.

    Rules of the competition:
    - The spirit of the competition is to find the best Battleship algorithm. Anything that is deemed against the spirit of the competition will be grounds for disqualification.
    - Interfering with an opponent is against the spirit of the competition.
    - Multithreading may be used under the following restrictions: no more than one thread may be running while it is not your turn (though any number of threads may be in a "Suspended" state); no thread may run at a priority other than "Normal". Given the above two restrictions, you will be guaranteed at least 3 dedicated CPU cores during your turn.
    - A limit of 1 second of CPU time per game is allotted to each competitor on the primary thread. Running out of time results in losing the current game.
    - Any unhandled exception will result in losing the current game.
    - Network access and disk access are allowed, but you may find the time restrictions fairly prohibitive. However, a few set-up and tear-down methods have been added to alleviate the time strain.
    - Code should be posted on Stack Overflow as an answer or, if too large, linked. Max total size (uncompressed) of an entry is 1 MB.
    - Officially, .NET 2.0 / 3.5 is the only framework requirement.
    - Your entry must implement the IBattleshipOpponent interface.

    Scoring:
    - Best 51 games out of 101 games is the winner of a match.
    - All competitors will play matched against each other, round-robin style.
    - The best half of the competitors will then play a double-elimination tournament to determine the winner. (Smallest power of two that is greater than or equal to half, actually.)
    - I will be using the TournamentApi framework for the tournament. The results will be posted here.
    - If you submit more than one entry, only your best-scoring entry is eligible for the double-elim.

    Good luck! Have fun!

    EDIT 1: Thanks to Freed, who has found an error in the Ship.IsValid function. It has been fixed. Please download the updated version of the framework.
    EDIT 2: Since there has been significant interest in persisting stats to disk and such, I have added a few non-timed set-up and tear-down events that should provide the required functionality. This is a semi-breaking change. That is to say: the interface has been modified to add functions, but no body is required for them. Please download the updated version of the framework.
EDIT 3: Bug Fix 1: GameWon and GameLost were only getting called in the case of a time out. Bug Fix 2: If an engine was timing out every game, the competition would never end. Please download the updated version of the framework. EDIT 4: Results!

    Read the article

  • pagination in fbjs/ajax

    - by fusion
    i've a search form in which i'm trying to implement pagination - getting the data through ajax. everything works out fine initially, except when i go to the next page or any of the links on the pagination. it gives me a page not found error. can anyone please point out what is wrong with my code? search.html <div class="search_wrapper"> <input type="text" name="query" id="query" class="txt_search" onkeyup="submitPage('http://website/name/search.php', 'txtHint', '1');" /> <input type="button" name="button" class="button_search" onclick="submitPage('http://website/name/search.php', 'txtHint', '1');" /> <p> <div id="txtHint"></div> </p> </div> search ajax.js: function submitPage(url, target_id, page) { // Retrieve element handles, and populate request parameters. var target = document.getElementById(target_id); if(typeof page == 'undefined') { page = 1; } // Set up an AJAX object. Typically, an FBML response is desired. document.getElementById(target_id).setInnerXHTML('<span id="caric"><center><img src="http://website/name/images/ajax-loader.gif" /></center></span>'); var ajax = new Ajax(); ajax.responseType = Ajax.FBML; ajax.requireLogin = true; ajax.ondone = function(data) { // When the FBML response is returned, populate the data into the target element. document.getElementById('caric').setStyle('display','none'); if (target) target.setInnerFBML(data); } ajax.onerror = function() { var msgdialog = new Dialog(); msgdialog.showMessage('Error', 'An error has occurred while trying to load.'); return false; } var params = { 'query' : document.getElementById('query').getValue() }; ajax.post(url, params, page); } search.php: $search_result = ""; if (isset($_POST["query"])) $search_result = trim($_POST["query"]); if(isset($_GET['page'])) $page = $_GET['page']; else $page = 1; ..... $self = $_SERVER['PHP_SELF']; $limit = 2; //Number of results per page $numpages=ceil($totalrows/$limit); $query = $query." ORDER BY idQuotes LIMIT " . ($page-1)*$limit . ",$limit"; $result = mysql_query($query, $conn) or die('Error:' .mysql_error()); ?> <div class="search_caption">Search Results</div> <div class="search_div"> <table> . . .display results </table> </div> <hr> <div class="searchmain"> <?php //Create and print the Navigation bar $nav=""; $next = $page+1; $prev = $page-1; if($page > 1) { $nav .= "<a onclick=\"submitPage('','','$prev'); return false;\" href=\"$self?page=" . $prev . "&q=" .urlencode($search_result) . "\">< Prev</a>"; $first = "<a onclick=\"submitPage('','','1'); return false;\" href=\"$self?page=1&q=" .urlencode($search_result) . "\"> << </a>" ; } else { $nav .= "&nbsp;"; $first = "&nbsp;"; } for($i = 1 ; $i <= $numpages ; $i++) { if($i == $page) { $nav .= "<span class=\"no_link\">$i</span>"; }else{ $nav .= "<a onclick=\"submitPage('','',$i); return false;\" href=\"$self?page=" . $i . "&q=" .urlencode($search_result) . "\">$i</a>"; } } if($page < $numpages) { $nav .= "<a onclick=\"submitPage('','','$next'); return false;\" href=\"$self?page=" . $next . "&q=" .urlencode($search_result) . "\">Next ></a>"; $last = "<a onclick=\"submitPage('','','$numpages'); return false;\" href=\"$self?page=$numpages&q=" .urlencode($search_result) . "\"> >> </a>"; } else { $nav .= "&nbsp;"; $last = "&nbsp;"; } echo $first . $nav . $last; ?> </div> this is the link which displays on the next page: http://apps.facebook.com/website-folder/search.php?page=2&q=good&_fb_fromhash=[some obscure number]

    Read the article

  • How to count each digit in a range of integers?

    - by Carlos Gutiérrez
    Imagine you sell those metallic digits used to number houses, locker doors, hotel rooms, etc. You need to find how many of each digit to ship when your customer needs to number doors/houses:

    - 1 to 100
    - 51 to 300
    - 1 to 2,000 with zeros to the left

    The obvious solution is to do a loop from the first to the last number, convert the counter to a string with or without zeros to the left, extract each digit, and use it as an index to increment an array of 10 integers. I wonder if there is a better way to solve this, without having to loop through the entire range of integers. Solutions in any language or pseudocode are welcome.

    Edit: Answers review

    John at CashCommons and Wayne Conrad comment that my current approach is good and fast enough. Let me use a silly analogy: if you were given the task of counting the squares on a chess board in less than 1 minute, you could finish the task by counting the squares one by one, but a better solution is to count the sides and do a multiplication, because you later may be asked to count the tiles in a building. Alex Reisner points to a very interesting mathematical law that, unfortunately, doesn't seem to be relevant to this problem. Andres suggests the same algorithm I'm using, but extracting digits with %10 operations instead of substrings. John at CashCommons and phord propose pre-calculating the digits required and storing them in a lookup table or, for raw speed, an array. This could be a good solution if we had an absolute, unmovable, set-in-stone maximum integer value. I've never seen one of those. High-Performance Mark and strainer computed the needed digits for various ranges. The result for one million seems to indicate there is a proportion, but the results for other numbers show different proportions. strainer found some formulas that may be used to count digits for numbers which are a power of ten. Robert Harvey had a very interesting experience posting the question at MathOverflow. One of the math guys wrote a solution using mathematical notation. Aaronaught developed and tested a solution using mathematics. After posting it he reviewed the formulas originating from MathOverflow and found a flaw in them (point to Stack Overflow :). noahlavine developed an algorithm and presented it in pseudocode.

    A new solution

    After reading all the answers and doing some experiments, I found that for a range of integers from 1 to 10^n - 1:

    - For digits 1 to 9, n * 10^(n-1) pieces are needed
    - For digit 0, if not using leading zeros, n * 10^(n-1) - (10^n - 1)/9 are needed
    - For digit 0, if using leading zeros, n * 10^(n-1) - n are needed

    The first formula was found by strainer (and probably by others), and I found the other two by trial and error (but they may be included in other answers). For example, if n = 6, the range is 1 to 999,999:

    - For digits 1 to 9 we need 6 * 10^5 = 600,000 of each one
    - For digit 0, without leading zeros, we need 6 * 10^5 - (10^6 - 1)/9 = 600,000 - 111,111 = 488,889
    - For digit 0, with leading zeros, we need 6 * 10^5 - 6 = 599,994

    These numbers can be checked using High-Performance Mark's results. Using these formulas, I improved the original algorithm. It still loops from the first to the last number in the range of integers, but, if it finds a number which is a power of ten, it uses the formulas to add to the digit counts the quantity for a full range of 1 to 9, or 1 to 99, or 1 to 999, etc.
    Here's the algorithm in pseudocode:

    integer First, Last   // First and last number in the range
    integer Number        // Current number in the loop
    integer Power         // Power is the n in 10^n in the formulas
    integer Nines         // Nines is the result of 10^n - 1, e.g. 10^5 - 1 = 99999
    integer Prefix        // First digits of a number. For 14,200, prefix is 142
    array 0..9 Digits     // Will hold the count for all the digits

    FOR Number = First TO Last
        CALL TallyDigitsForOneNumber WITH Number, 1      // Tally the count of each digit
                                                         // in the number, increment by 1

        // Start of optimization. Comments are for Number = 1,000 and Last = 8,000.
        Power = Zeros at the end of Number               // For 1,000, Power = 3
        IF Power > 0                                     // The number ends in 0, 00, 000, etc.
            Nines = 10^Power - 1                         // Nines = 10^3 - 1 = 1000 - 1 = 999
            IF Number + Nines <= Last                    // If 1,000 + 999 < 8,000, add a full set
                Digits[0-9] += Power * 10^(Power-1)      // Add 3*10^(3-1) = 300 to digits 0 to 9
                Digits[0] -= -Power                      // Adjust digit 0 (leading zeros formula)
                Prefix = First digits of Number          // For 1,000, prefix is 1
                CALL TallyDigitsForOneNumber WITH Prefix, Nines   // Tally the count of each digit
                                                                  // in prefix, increment by 999
                Number += Nines                          // Increment the loop counter by 999 cycles
            ENDIF
        ENDIF
        // End of optimization
    ENDFOR

    SUBROUTINE TallyDigitsForOneNumber PARAMS Number, Count
        REPEAT
            Digits[Number % 10] += Count
            Number = Number / 10
        UNTIL Number = 0

    For example, for the range 786 to 3,021, the counter will be incremented:

    - By 1 from 786 to 790 (5 cycles)
    - By 9 from 790 to 799 (1 cycle)
    - By 1 from 799 to 800
    - By 99 from 800 to 899
    - By 1 from 899 to 900
    - By 99 from 900 to 999
    - By 1 from 999 to 1000
    - By 999 from 1000 to 1999
    - By 1 from 1999 to 2000
    - By 999 from 2000 to 2999
    - By 1 from 2999 to 3000
    - By 1 from 3000 to 3010 (10 cycles)
    - By 9 from 3010 to 3019 (1 cycle)
    - By 1 from 3019 to 3021 (2 cycles)

    Total: 28 cycles. Without optimization: 2,235 cycles.

    Note that this algorithm solves the problem without leading zeros. To use it with leading zeros, I used a hack: if the range 700 to 1,000 with leading zeros is needed, use the algorithm for 10,700 to 11,000 and then subtract 1,000 - 700 = 300 from the count of digit 1.

    Benchmark and source code

    I tested the original approach, the same approach using %10, and the new solution for some large ranges, with these results:

        Original              104.78 seconds
        With %10               83.66
        With Powers of Ten      0.07

    A screenshot of the benchmark application:

    If you would like to see the full source code or run the benchmark, use these links:
    - Complete source code (in Clarion): http://sca.mx/ftp/countdigits.txt
    - Compilable project and win32 exe: http://sca.mx/ftp/countdigits.zip

    Accepted answer

    noahlavine's solution may be correct, but I just couldn't follow the pseudocode; I think there are some details missing or not completely explained. Aaronaught's solution seems to be correct, but the code is just too complex for my taste. I accepted strainer's answer, because his line of thought guided me to develop this new solution.
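    As an editorial aside (not part of the original question or its Clarion benchmark code), the closed-form counts quoted above are easy to sanity-check with a short sketch; the helper names below are made up for the illustration:

        # Sketch: closed-form digit counts for the range 1 .. 10**n - 1,
        # checked against a brute-force tally for small n.

        def closed_form_counts(n, leading_zeros=False):
            counts = {d: n * 10 ** (n - 1) for d in range(1, 10)}   # digits 1-9
            if leading_zeros:
                counts[0] = n * 10 ** (n - 1) - n                   # zero-padded to n digits
            else:
                counts[0] = n * 10 ** (n - 1) - (10 ** n - 1) // 9  # no leading zeros
            return counts

        def brute_force_counts(n, leading_zeros=False):
            counts = {d: 0 for d in range(10)}
            for number in range(1, 10 ** n):
                text = str(number).zfill(n) if leading_zeros else str(number)
                for ch in text:
                    counts[int(ch)] += 1
            return counts

        for n in (2, 3, 4):
            assert closed_form_counts(n) == brute_force_counts(n)
            assert closed_form_counts(n, True) == brute_force_counts(n, True)

        print(closed_form_counts(6))        # digit 0 -> 488889, digits 1-9 -> 600000 each
        print(closed_form_counts(6, True))  # digit 0 -> 599994

    The printed values for n = 6 match the 488,889 and 599,994 figures worked out in the question above.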

    Read the article

  • MSCC: Global Windows Azure Bootcamp

    Mauritius participated and contributed to the Global Windows Azure Bootcamp 2014 (GWAB). Again! And this time stronger than ever, and together with 137 other locations in 56 countries world-wide. We had 62 named registrations, 7 guest additions and approximately 10 offline participants prior to the event day. Most interestingly the organisation of the GWAB through the MSCC helped to increased the number of craftsmen. The Mauritius Software Craftsmanship Community has currently 138 registered members - in less than one year! Only with those numbers we can proudly say that all the preparations and hard work towards this event already paid off. Personally, I'm really grateful that we had this kind of response and the feedback from some attendees confirmed that the MSCC is on the right track here on Cyber Island Mauritius. Inspired and motivated by the success of this event, rest assured that there will be more public events like the GWAB. This time it took some time to reflect on our meetup, following my first impression right on spot: "Wow, what an experience to organise and participate in this global event. Overall, I've been very pleased with the preparations and the event itself. Surely, there have been some nicks that we have to address and to improve for future activities like this. Quite frankly, we are not professional event organisers (not yet) but we learned a lot over the past couple of days. A big Thank You to our event sponsors, namely Microsoft Indian Ocean Islands & French Pacific, Ceridian Mauritius and Emtel. Without them this event wouldn't have happened - Thank You! And to the cool team members of Microsoft Student Partners (MSPs). You geeks did a great job! Thanks!" So, how many attendees did we actually have? 61! - Awesome - 61 cloud computing instances to help on the research of diabetes. During Saturday afternoon there was even an online publication on L'Express: Les développeurs mauriciens se joignent au combat contre le diabète Reactions of other attendees Don't take my word for granted... Here are some impressions and feedback from our participants: "Awesome event, really appreciated the presentations :-)" -- Kevin on event comments "very interesting and enriching." -- Diana on event comments "#gwab #gwabmru 2014 great success. Looking forward for gwab 2015" -- Wasiim on Twitter "Was there till the end. Awesome Event. I'll surely join upcoming meetup sessions :)" -- Luchmun on event comments "#gwabmru was not that cool. left early" -- Mohammad on Twitter The overall feedback is positive but we are absolutely aware that there quite a number of problems we had to face. We are already looking into that and ideas / action plans on how we will be able to improve it for future events. The sessions We started the day with welcoming speeches by Thierry Coret, Sr. Marketing Manager of Microsoft Indian Ocean Islands & French Pacific and Vidia Mooneegan, Managing Director and Sr. Vice President of Ceridian Mauritius. The clear emphasis was on the endless possibilities of cloud computing and how it can enable any kind of sectors here in the country. Then it was about time to set up the cloud computing services in order to contribute each attendees cloud computing resources to the global research of diabetes, a step by step guide presented by Arnaud Meslier, Technical Evangelist at Microsoft. Given a rendering package and a configuration file it was very interesting to follow the single steps in Windows Azure. 
    Also, during the day we were not sure whether the set-up had been done correctly, as Mauritius didn't show up on the results board - which should have been the case after approximately 20 to 30 minutes. Anyway, let the minions work... Next, Arnaud gave a brief overview of the variety of services Windows Azure has to offer, whether you need a development environment for your websites or mobile apps, want to run a virtual machine with your existing applications, or simply want to put a SQL database online. No worries: Windows Azure has the right packages available, and the online management portal is really easy to handle. After this, we got a little bit more business oriented as Wasiim Hosenbocus, an employee at Ceridian, took the attendees through the internals of a real-life application and demoed a couple of the existing features. He did a great job of showing how the different services of Windows Azure can be created and pulled together. After the lunch break it is always tough to keep the audience awake... and it was my turn. I gave a brief overview of operating and managing a SQL database on Windows Azure. Well, there are actually two options available, and depending on your individual requirements you should be aware of both. The simpler version is called SQL Database, and while provisioning only takes a couple of seconds, you should take into consideration that SQL Database has a number of constraints, like limitations on the actual database size - up to 5 GB as web edition and up to 150 GB maximum as business edition - among others. Next, it was Chervine Bhiwoo's session on Windows Azure Mobile Services. It was absolutely amazing to see that the mobile services directly offer you various project templates, like Windows 8 Store app, Android app, iOS app, and even Xamarin cross-platform app development. So, within a couple of minutes you can have your first mobile app active and running on Windows Azure. Furthermore, Chervine showed the attendees that adding another user interface, like Web Sites running on ASP.NET MVC 4, only takes a couple of minutes, too. And last but not least, we rounded up the day with Windows Azure Websites and hosting of Virtual Machines, presented by some members of the local Microsoft Student Partners programme. Surely, one of the big advantages of using Windows Azure is the availability of pre-defined installation packages of known web applications, like WordPress, Joomla!, or Ghost. Compared to running your own web site with a traditional web hoster it is surely on par, and depending on your personal level of expertise, Windows Azure provides you more liberty in terms of configuration than a shared hosting environment might. Running a pre-defined web application is one thing, but in case you would like to have more control over your hosting environment, it is highly advisable to opt for a virtual machine. Provisioning of an Ubuntu 12.04 LTS system was very simple to do, but it takes a few more minutes than you might expect, so please be patient and take your time while Windows Azure gets everything in place for you. Afterwards, you can use a Secure Shell (SSH) client like PuTTY in the case of a Linux-based machine, or Remote Desktop Services when operating a Windows Server system, to log in to your virtual machine. At the end of the day we had a great Q&A session, and we finalised the event with our raffle of goodies. Participation in the raffle was bound to submission of the session survey, and most gratefully we had a give-away for everyone.
What a nice coincidence to finish off the day. Note: All session slides (and demo code) will be made available on the MSCC event page. Please check the Files section over there. (Some) Visual impressions from the event Just to give you an idea about what happened during the GWAB 2014 at Ebene... Speakers and Microsoft Student Partners are getting ready for the Global Windows Azure Bootcamp 2014 GWAB 2014 attendees are fully integrated into the hands-on labs and setting up their individual cloud computing services 60 attendees at the GWAB 2014. Despite some technical difficulties we had a great time running the event GWAB 2014: Using the lunch break for networking and exchange of ideas - great conversations and topics amongst attendees There are more pictures on the original event page. Questions & Answers Following are a couple of questions which were asked and didn't get an answer during the event: Q: Is it possible to upload static pages via FTP? A: Yes, you can. Have a look at the right-side column on the dashboard of your website. There you'll find information about the FTP and SFTP host names. You can use any FTP client, like FileZilla, to log in. FTP also gives you access to your log files. Q: What are the size limitations on SQL Database? A: 5 GB on the Web edition, and 150 GB on the Business edition. A maximum of 150 databases (including 'master') per SQL Database server. More details here: General Guidelines and Limitations (Azure SQL Database) Q: What's the maximum size of a SQL Server running in a Virtual Machine? A: The maximum Windows Azure VM currently has 8 CPU cores, 14 or 56 GB of RAM and 16x 1 TB hard drives. More details here: Virtual Machine and Cloud Service Sizes for Azure Q: How can we register for Windows Azure? A: Mauritius is currently not listed for phone verification. Please get in touch with Arnaud Meslier at Microsoft IOI & FP Q: Can I use my own domain name for Windows Azure websites? A: Yes, you can. But this might require you to upscale your account to Standard. In case I missed a question and answer, please use the comment section at the end of the article. Thanks! Final results Every participant was instructed during the hands-on-lab session on how to set up a cloud computing service in their account. Of course, I won't keep the results from you... Global Azure Lab GWAB 2014: Our cloud computing contribution to the research on diabetes And I would say Mauritius did a good job! Upcoming Events What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order: Launch of Microsoft SQL Server 2014 (15.4.2014) Corsair Hackers Reboot (19.4.2014) WebCup (TBA ~ June 2014) Developers Conference (TBA ~ July 2014) Linuxfest 2014 (TBA ~ November 2014) Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks! Networking and job/project opportunities Beyond the technical presentations on Windows Azure, an event like this always offers plenty of opportunities to get in touch with new people in IT and to exchange experiences with other like-minded people. 
    As I already wrote about Communities - The importance of exchange and discussion - I had a great conversation with representatives of the Université des Mascareignes, which is currently embracing cloud infrastructure and cloud computing for its various campuses in the Indian Ocean. As for the MSCC, it would be a great experience to stay in touch with them and to work on upcoming joint activities. Furthermore, I had a very good conversation with Thierry and Ludovic of Microsoft IOI & FP on the necessity of user groups and IT communities here on the island. It's great to see that the MSCC is currently on a roll and that local companies share our thoughts on promoting IT careers and the exchange of IT knowledge in such an open way. I'm also looking forward to being able to participate in and contribute to more events in the near future. My résumé of the day We learned a lot today and there is always room for improvement! It was an awesome event and quite frankly it was a pleasure to spend the day with so many enthusiastic IT people in the same room. It was a great experience to organise such an event locally and to participate on a global scale to support the GlyQ-IQ Technology in its research on diabetes. I was very pleased to see the involvement of new MSCC members in taking the opportunity to share and learn about the power of cloud computing. The Mauritius Software Craftsmanship Community is on the right track, and this year's bootcamp on Windows Azure only marked the beginning of our journey. Thank you to our sponsors and my kudos to the MSPs! Update: Media coverage The event has been reported in the local media, too. Following are some resources: Orange - Local - Business: Le cloud, pour des recherches approfondies sur le diabète (The cloud, for in-depth research on diabetes) Maurice Info.mu: Le cloud, pour des recherches approfondies sur le diabète Le Quotidien Pg 2: Global Windows Azure Bootcamp 2014 - Le cloud pour des recherches approfondies sur le diabète The Observer Pg 12: Le cloud, pour des recherches approfondies sur le diabète

    Read the article

  • Pagination links broken - php/jquery

    - by ClarkSKent
    Hey, I'm still trying to get my pagination links to load properly and dynamically, but I can't seem to find a solution to this one problem. The problem I am having is that when I click any of the pagination number links to go to the next page, the new content does not load. Literally nothing happens, and when looking at the console in Firebug, nothing is sent or loaded. I have on the main page 3 links to filter the content and display it. When any of these links are clicked the results are loaded and displayed along with the associated pagination numbers for that specific content. I believe the problem is coming from the SQL query in generate_pagination.php (seen below). When I hard-code the SQL category part it works, but it is not dynamic at all. This is why I'm calling $ids=$_GET['ids']; and trying to put that into the category section, but then the numbers don't display at all. If I echo out the $ids variable and click on a filter it does display the correct name/id, so I don't know why this doesn't work. Here is the main page so you can see how I am including and starting the function (I'm new to PHP): <?php include_once('generate_pagination.php'); ?> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script> <script type="text/javascript" src="jquery_pagination.js"></script> <div id="loading" ></div> <div id="content" data-page="1"></div> <ul id="pagination"> <?php //Pagination Numbers for($i=1; $i<=$pages; $i++) { echo '<li class="page_numbers" id="page'.$i.'">'.$i.'</li>'; } ?> </ul> <br /> <br /> <a href="#" class="category" id="marketing">Marketing</a> <a href="#" class="category" id="automotive">Automotive</a> <a href="#" class="category" id="sports">Sports</a> Here is the generate pagination script where the problem seems to occur: <?php $ids=$_GET['ids']; include_once('config.php'); $per_page = 3; //Calculating no of pages $sql = "SELECT COUNT(*) FROM explore WHERE category='$ids'"; $result = mysql_query($sql); $count = mysql_fetch_row($result); $pages = ceil($count[0]/$per_page); ?> I thought I might as well post the jQuery script in case someone wants to see it: $(document).ready(function(){ //Display Loading Image function Display_Load() { $("#loading").fadeIn(900,0); $("#loading").html("<img src='bigLoader.gif' />"); } //Hide Loading Image function Hide_Load() { $("#loading").fadeOut('slow'); }; //Default Starting Page Results $("#pagination li:first").css({'color' : '#FF0084'}).css({'border' : 'none'}); Display_Load(); $("#content").load("pagination_data.php?page=1", Hide_Load()); // Editing below. 
// Sort content Marketing $("a.category").click(function() { Display_Load(); var this_id = $(this).attr('id'); $.get("pagination.php", { category: this.id }, function(data){ //Load your results into the page var pageNum = $('#content').attr('data-page'); $("#pagination").load('generate_pagination.php?category=' + pageNum +'&ids='+ this_id ); $("#content").load("filter_marketing.php?page=" + pageNum +'&id='+ this_id, Hide_Load()); }); }); //Pagination Click $("#pagination li").click(function(){ Display_Load(); //CSS Styles $("#pagination li") .css({'border' : 'solid #dddddd 1px'}) .css({'color' : '#0063DC'}); $(this) .css({'color' : '#FF0084'}) .css({'border' : 'none'}); //Loading Data var pageNum = $(this).attr("id").replace("page",""); $("#content").load("pagination_data.php?page=" + pageNum, function(){ $(this).attr('data-page', pageNum); Hide_Load(); }); }); }); If anyone could assist me in solving this problem that would be great, thanks.
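One possible direction (a sketch only, not a verified fix, assuming the same explore table and a config.php that opens the mysql connection): have generate_pagination.php accept the category from either parameter name the JavaScript sends, escape it before building the query, and emit the page-number markup itself when it is called via AJAX, so $("#pagination").load(...) actually has something to insert:

<?php
// Hypothetical, hardened version of generate_pagination.php
include_once('config.php');
$per_page = 3;

// Accept the category from 'ids' (the AJAX call) or fall back to 'category'.
$ids = isset($_GET['ids']) ? $_GET['ids'] : (isset($_GET['category']) ? $_GET['category'] : '');
$ids = mysql_real_escape_string($ids); // keep the query intact and avoid SQL injection

// Calculate the number of pages for the requested category.
$sql    = "SELECT COUNT(*) FROM explore WHERE category='$ids'";
$result = mysql_query($sql);
$count  = mysql_fetch_row($result);
$pages  = ceil($count[0] / $per_page);

if (isset($_GET['ids']) || isset($_GET['category'])) {
    // Called via AJAX: output the pagination items directly.
    // (When the file is merely include_once'd on the main page, nothing is echoed;
    // the main page keeps using $pages in its own loop.)
    for ($i = 1; $i <= $pages; $i++) {
        echo '<li class="page_numbers" id="page' . $i . '">' . $i . '</li>';
    }
}
?>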

    Read the article

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP set ups handily. However all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent outputting "Hello world\n" with text/plain as the HTML, but I only included the HTML as it's closer to the use case I was investigating. My testing box: Core i7-2600 Intel CPU (has 8 threads with 4 cores) 8GB DDR3 RAM Fedora 16 64bit Node.js v0.6.13 Nginx v1.0.13 PHP v5.3.10 (with PHP-FPM) My test scripts: Node.js script var cluster = require('cluster'); var http = require('http'); var numCPUs = require('os').cpus().length; if (cluster.isMaster) { // Fork workers. for (var i = 0; i < numCPUs; i++) { cluster.fork(); } cluster.on('death', function (worker) { console.log('worker ' + worker.pid + ' died'); }); } else { // Worker processes have an HTTP server. http.Server(function (req, res) { res.writeHead(200, {'Content-Type': 'text/html'}); res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n'); for (var i = 0; i < 10000; i++) { res.write('<p>Hello world</p>\n'); } res.end('</body>\n</html>'); }).listen(80); } This script is adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html PHP script <?php echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n"; for ($i = 0; $i < 10000; $i++) { echo "<p>Hello world</p>\n"; } echo "</body>\n</html>"; My results Node.js $ ab -n 500 -c 20 http://speedtest.dev/ This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: Server Hostname: speedtest.dev Server Port: 80 Document Path: / Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 14.603 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95066500 bytes HTML transferred: 95035000 bytes Requests per second: 34.24 [#/sec] (mean) Time per request: 584.123 [ms] (mean) Time per request: 29.206 [ms] (mean, across all concurrent requests) Transfer rate: 6357.45 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 2 Processing: 94 547 405.4 424 2516 Waiting: 0 331 399.3 216 2284 Total: 95 547 405.4 424 2516 Percentage of the requests served within a certain time (ms) 50% 424 66% 607 75% 733 80% 813 90% 1084 95% 1325 98% 1843 99% 2062 100% 2516 (longest request) PHP/Nginx $ ab -n 500 -c 20 http://speedtest.dev/test.php This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: nginx/1.0.13 Server Hostname: 
speedtest.dev Server Port: 80 Document Path: /test.php Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 0.130 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95109000 bytes HTML transferred: 95035000 bytes Requests per second: 3849.11 [#/sec] (mean) Time per request: 5.196 [ms] (mean) Time per request: 0.260 [ms] (mean, across all concurrent requests) Transfer rate: 715010.65 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 1 Processing: 3 5 0.7 5 7 Waiting: 1 4 0.7 4 7 Total: 3 5 0.7 5 7 Percentage of the requests served within a certain time (ms) 50% 5 66% 5 75% 5 80% 6 90% 6 95% 6 98% 6 99% 6 100% 7 (longest request) Additional details Again, what I'm looking for is to find out if I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well, however these test results (which I really hope I made a mistake with - as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against Node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.
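One variable worth isolating (an observation added here, not part of the original question): the Node handler issues 10,000 separate res.write() calls, each of which becomes its own chunk, while the PHP output is buffered and flushed in large blocks by PHP/Nginx. A sketch of the same cluster setup that still builds the markup per request (as the PHP script does) but sends it in a single write might look roughly like this:

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    // Fork one worker per CPU, as in the original script.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
} else {
    http.Server(function (req, res) {
        // Build the body per request, but send it in one call instead of 10,000 writes.
        var body = '<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n';
        for (var i = 0; i < 10000; i++) {
            body += '<p>Hello world</p>\n';
        }
        body += '</body>\n</html>';
        res.writeHead(200, {
            'Content-Type': 'text/html',
            'Content-Length': Buffer.byteLength(body)
        });
        res.end(body);
    }).listen(80);
}

Whether this closes the gap with PHP-FPM would have to be re-measured, but it at least separates raw string-building speed from per-write overhead in the comparison.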

    Read the article

  • Windows Azure: Major Updates for Mobile Backend Development

    - by ScottGu
    This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include: Mobile Services: Custom API support Mobile Services: Git Source Control support Mobile Services: Node.js NPM Module support Mobile Services: A .NET API via NuGet Mobile Services and Web Sites: Free 20MB SQL Database Option for Mobile Services and Web Sites Mobile Notification Hubs: Android Broadcast Push Notification Support All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Mobile Services: Custom APIs, Git Source Control, and NuGet Windows Azure Mobile Services provides the ability to easily stand up a mobile backend that can be used to support your Windows 8, Windows Phone, iOS, Android and HTML5 client applications.  Starting with the first preview we supported the ability to easily extend your data backend logic with server side scripting that executes as part of client-side CRUD operations against your cloud backend's data tables. With today’s update we are extending this support even further and introducing the ability for you to also create and expose Custom APIs from your Mobile Service backend, and easily publish them to your Mobile clients without having to associate them with a data table. This capability enables a whole set of new scenarios – including the ability to work with data sources other than SQL Databases (for example: Table Services or MongoDB), broker calls to 3rd party APIs, integrate with Windows Azure Queues or Service Bus, work with custom non-JSON payloads (e.g. Windows Periodic Notifications), route client requests to services back on-premises (e.g. with the new Windows Azure BizTalk Services), or simply implement functionality that doesn’t correspond to a database operation.  The custom APIs can be written in server-side JavaScript (using Node.js) and can use Node’s NPM packages.  We will also be adding support for custom APIs written using .NET in the future as well. Creating a Custom API Adding a custom API to an existing Mobile Service is super easy.  Using the Windows Azure Management Portal you can now simply click the new “API” tab within your Mobile Service, and then click the “Create a Custom API” button to create a new Custom API within it: Give the API whatever name you want to expose, and then choose the security permissions you’d like to apply to the HTTP methods you expose within it.  You can easily lock down the HTTP verbs of your Custom API to be available to anyone, only those who have a valid application key, only authenticated users, or administrators.  Mobile Services will then enforce these permissions without you having to write any code: When you click the OK button you’ll see the new API show up in the API list.  Selecting it will enable you to edit the default script that contains some placeholder functionality: Today’s release enables Custom APIs to be written using Node.js (we will support writing Custom APIs in .NET as well in a future release), and the Custom API programming model follows the Node.js convention for modules, which is to export functions to handle HTTP requests. The default script above exposes functionality for an HTTP POST request. To support a GET, simply change the export statement accordingly.  
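Below is an example of the kind of code that could read and return data from Windows Azure Table Storage using the Azure Node API (a sketch only; the 'todoitems' table name and the storage credentials are placeholder assumptions, and the error-handling pattern mirrors the todos.js snippet later in this post):

var azure = require('azure');

exports.get = function (request, response) {
    // Connect to the storage account (credentials are placeholders).
    var tableService = azure.createTableService('<your storage account>', '<your storage access key>');
    // Query every entity in the (assumed) 'todoitems' table.
    var query = azure.TableQuery.select().from('todoitems');
    tableService.queryEntities(query, function (error, todoItems) {
        if (error) {
            console.error("Error querying table: " + error);
            response.send(500);
        } else {
            response.send(200, todoItems);
        }
    });
};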
After saving the changes, you can now call this API from any Mobile Service client application (including Windows 8, Windows Phone, iOS, Android or HTML5 with CORS). Below is the code for how you could invoke the API asynchronously from a Windows Store application using .NET and the new InvokeApiAsync method, and data-bind the results to a control within your XAML:     private async void RefreshTodoItems() {         var results = await App.MobileService.InvokeApiAsync<List<TodoItem>>("todos", HttpMethod.Get, parameters: null);         ListItems.ItemsSource = new ObservableCollection<TodoItem>(results);     }    Integrating authentication and authorization with Custom APIs is really easy with Mobile Services. Just like with data requests, custom API requests enjoy the same built-in authentication and authorization support of Mobile Services (including integration with Microsoft ID, Google, Facebook and Twitter authentication providers), and it also enables you to easily integrate your Custom API code with other Mobile Service capabilities like push notifications, logging, SQL, etc. Check out our new tutorials to learn more about how to use the new Custom API support, and start adding Custom APIs to your app today. Mobile Services: Git Source Control Support Today’s Mobile Services update also enables source control integration with Git.  The new source control support provides a Git repository as part of your Mobile Service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that Git repository on your local machine, make changes to any of your scripts, and then easily deploy the mobile service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux). To use the new support, navigate to the dashboard for your mobile service and select the Set up source control link: If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository: Once you configure this, you can switch to the configure tab of your Mobile Service and you will see a Git URL you can use to access your repository: You can use this URL to clone the repository locally from your favorite command line: > git clone https://scottgutodo.scm.azure-mobile.net/ScottGuToDo.git As you can see, the repository contains a service folder with several subfolders (a sketch of the layout follows this description). Custom API scripts and associated permissions appear under the api folder as .js and .json files respectively (the .json files persist a JSON representation of the security settings for your endpoints). Similarly, table scripts and table permissions appear as .js and .json files, but since table scripts are separate per CRUD operation, they follow the naming convention of <tablename>.<operationname>.js. Finally, scheduled job scripts appear in the scheduler folder, and the shared folder is provided as a convenient location for you to store code shared by multiple scripts and a few miscellaneous things such as the APNS feedback script. 
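For orientation, a rough sketch of that repository layout, reconstructed from the description above (the concrete file names are illustrative assumptions):

service/
    api/
        myapi.js        -- Custom API script (name is illustrative)
        myapi.json      -- permission settings for that API
    table/
        todos.insert.js
        todos.read.js
        todos.update.js
        todos.delete.js
        todos.json      -- table permissions
    scheduler/          -- scheduled job scripts
    shared/             -- shared code, e.g. the APNS feedback script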
Let's modify the table script todos.js file so that we have slightly better error handling when an exception occurs when we query our Table service: todos.js tableService.queryEntities(query, function(error, todoItems){     if (error) {         console.error("Error querying table: " + error);         response.send(500);     } else {         response.send(200, todoItems);     }        }); Save these changes, and now back at the command line prompt commit the changes and push them to the Mobile Service: > git add . > git commit -m "better error handling in todos.js" > git push Once deployment of the changes is complete, they will take effect immediately, and you will also see the changes reflected in the portal: With the new Source Control feature, we’re making it really easy for you to edit your mobile service locally and push changes in an atomic fashion without sacrificing ease of use in the Windows Azure Portal. Mobile Services: NPM Module Support The new Mobile Services source control support also allows you to add any Node.js module you need in the scripts beyond the fixed set provided by Mobile Services. For example, you can easily switch to MongoDB instead of Windows Azure Table Storage in our example above. Set up MongoDB by either purchasing a MongoLab subscription (which provides MongoDB as a Service) via the Windows Azure Store or setting it up yourself on a Virtual Machine (either Windows or Linux). Then go to the service folder of your local Git repository and run the following command: > npm install mongoose This will add the Mongoose module to your Mobile Service scripts.  After that you can use and reference the Mongoose module in your custom API scripts to access your Mongo database: var mongoose = require('mongoose'); var schema = mongoose.Schema({ text: String, completed: Boolean });   exports.get = function (request, response) {     mongoose.connect('<your Mongo connection string>');     TodoItemModel = mongoose.model('todoitem', schema);     TodoItemModel.find(function (err, items) {         if (err) {             console.log('error:' + err);             return response.send(500);         }         response.send(200, items);     }); }; Don’t forget to push your changes to your mobile service once you are done: > git add . > git commit -m "Switched to use Mongo Labs" > git push Now our Mobile Service app is using MongoDB! Note, with today’s update usage of custom Node.js modules is limited to Custom API scripts only. We will enable it in all scripts (including data and custom CRON tasks) shortly. New Mobile Services NuGet package, including .NET 4.5 support A few months ago we announced a new pre-release version of the Mobile Services client SDK based on portable class libraries (PCL). Today, we are excited to announce that this new library is now a stable .NET client SDK for Mobile Services and is no longer a pre-release package. Today’s update includes full support for Windows Store, Windows Phone 7.x, and .NET 4.5, which allows developers to use Mobile Services from ASP.NET or WPF applications. You can install and use this package today via NuGet. Mobile Services and Web Sites: Free 20MB Database for Mobile Services and Web Sites Starting today, every customer of Windows Azure gets one free 20MB SQL database to use for 12 months (for both dev/test and production) with Web Sites and Mobile Services. 
When creating a Mobile Service or a Web Site, simply choose the new “Create a new Free 20MB database” option to take advantage of it: You can use this free SQL Database together with the 10 free Web Sites and 10 free Mobile Services you get with your Windows Azure subscription, or from any other Windows Azure VM or Cloud Service. Notification Hubs: Android Broadcast Push Notification Support Earlier this year, we introduced a new capability in Windows Azure for sending broadcast push notifications at high scale: Notification Hubs. In the initial preview of Notification Hubs you could use this support with both iOS and Windows devices.  Today we’re excited to announce new Notification Hubs support for sending push notifications to Android devices as well. Push notifications are a vital component of mobile applications.  They are critical not only in consumer apps, where they are used to increase app engagement and usage, but also in enterprise apps where up-to-date information increases employee responsiveness to business events.  You can use Notification Hubs to send push notifications to devices from any type of app (a Mobile Service, Web Site, Cloud Service or Virtual Machine). Notification Hubs provide you with the following capabilities: Cross-platform Push Notifications Support. Notification Hubs provide a common API to send push notifications to iOS, Android, or Windows Store apps at once.  Your app can send notifications in platform specific formats or in a platform-independent way.  Efficient Multicast. Notification Hubs are optimized to enable push notification broadcast to thousands or millions of devices with low latency.  Your server back-end can fire one message into a Notification Hub, and millions of push notifications can automatically be delivered to your users.  Devices and apps can specify a number of per-user tags when registering with a Notification Hub. These tags do not need to be pre-provisioned or disposed, and provide a very easy way to send filtered notifications to an infinite number of users/devices with a single API call.   Extreme Scale. Notification Hubs enable you to reach millions of devices without you having to re-architect or shard your application.  The pub/sub routing mechanism allows you to broadcast notifications in a super-efficient way.  This makes it incredibly easy to route and deliver notification messages to millions of users without having to build your own routing infrastructure. Usable from any Backend App. Notification Hubs can be easily integrated into any back-end server app, whether it is a Mobile Service, a Web Site, a Cloud Service or an IaaS VM. It is easy to configure Notification Hubs to send push notifications to Android. 
Create a new Notification Hub within the Windows Azure Management Portal (New->App Services->Service Bus->Notification Hub): Then register for Google Cloud Messaging using https://code.google.com/apis/console and obtain your API key, then simply paste that key on the Configure tab of your Notification Hub management page under the Google Cloud Messaging Settings: Then just add code to the OnCreate method of your Android app’s MainActivity class to register the device with Notification Hubs: gcm = GoogleCloudMessaging.getInstance(this); String connectionString = "<your listen access connection string>"; hub = new NotificationHub("<your notification hub name>", connectionString, this); String regid = gcm.register(SENDER_ID); hub.register(regid, "myTag"); Now you can broadcast notifications from your .NET backend (or Node, Java, or PHP) to any Windows Store, Android, or iOS device registered for the "myTag" tag via a single API call (you can literally broadcast messages to millions of clients you have registered with just one API call): var hubClient = NotificationHubClient.CreateClientFromConnectionString(                   "<your connection string with full access>",                   "<your notification hub name>"); hubClient.SendGcmNativeNotification("{ 'data' : {'msg' : 'Hello from Windows Azure!' } }", "myTag"); Notification Hubs provide an extremely scalable, cross-platform push notification infrastructure that enables you to efficiently route push notification messages to millions of mobile users and devices.  It will make enabling your push notification logic significantly simpler and more scalable, and allow you to build even better apps with it. Learn more about Notification Hubs here on MSDN. Summary The above features are now live and available to start using immediately (note: some of the services are still in preview).  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • CodePlex Daily Summary for Thursday, November 24, 2011

    CodePlex Daily Summary for Thursday, November 24, 2011Popular ReleasesASP.NET Comet Ajax Library (Reverse Ajax - Server Push): ASP.NET Reverse Ajax Samples: Chat, MVC Razor, DesktopClient, Reverse Ajax for VB.NET and C#Windows Azure SDK for PHP: Windows Azure SDK for PHP v4.0.5: INSTALLATION Windows Azure SDK for PHP requires no special installation steps. Simply download the SDK, extract it to the folder you would like to keep it in, and add the library directory to your PHP include_path. INSTALLATION VIA PEAR Maarten Balliauw provides an unofficial PEAR channel via http://www.pearplex.net. Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPAzure Or if you've already installed PHPAzure before: pear upgrade p...Anno 2070 Assistant: Beta v1.0 (STABLE): Anno 2070 Assistant Beta v1.0 Released! Features Included: Complete Building Layouts for Ecos, Tycoons & Techs Complete Production Chains for Ecos, Tycoons & Techs Completed Credits Screen Known Issues: Not all production chains and building layouts may be on the lists because they have not yet been discovered. However, data is still 99.9% complete. Currently the Supply & Demand, including Calculator screen are disabled until version 1.1.Minemapper: Minemapper v0.1.7: Including updated Minecraft Biome Extractor and mcmap to support the new Minecraft 1.0.0 release (new block types, etc).Metro Pandora: Metro Pandora SDK V1: For more information on this release please see Metro Pandora SDK Introduction. Supported platforms: Windows Phone 7 / Silverlight Windows 8 .Net 4.0, WPF, WinformsVisual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: * strdup and _wcsdup functions support added. * Preliminary support for VS 11 added. Bugs Fixed: * Low performance after upgrading from VLD v2.1. * Memory leaks with static linking fixed (disabled calloc support). * Runtime error R6002 fixed because of wrong memory dump format. * version.h fixed in installer. * Some PVS studio warning fixed.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.10: 3.6.0.10 22-Nov-2011 Update: Removed PreEmptive Platform integration (PreEmptive analytics) Removed all PreEmptive attributes Removed PreEmptive.dll assembly references from all projects Added first support to ADAM/AD LDS Thanks to PatBea. Work Item 9775: http://netsqlazman.codeplex.com/workitem/9775Developer Team Article System Management: DTASM v1.3: ?? ??? ???? 3 ????? ???? ???? ????? ??? : - ????? ?????? ????? ???? ?? ??? ???? ????? ?? ??? ? ?? ???? ?????? ???? ?? ???? ????? ?? . - ??? ?? ???? ????? ???? ????? ???? ???? ?? ????? , ?????? ????? ????? ?? ??? . - ??? ??????? ??? ??? ???? ?? ????? ????? ????? .VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains Silverlight 5 library: Vlc.DotNet.Silverlight. A sample could be tested here The new version add and correct many features : Correction : Reinitialize some variables Deprecate : Logging API, since VLC 1.2 (08/20/2011) Add subitem in LocationMedia (for Youtube videos, ...) Update Wpf sample to use Youtube videos Many others correctionsSharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! 
The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1SQL Monitor - managing sql server performance: SQLMon 4.1 alpha 6: 1. improved support for schema 2. added find reference when right click on object list 3. added object rename supportBugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadedVsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 30 (beta)New: Support for TortoiseSVN 1.7 added. (the download contains both setups, for TortoiseSVN 1.6 and 1.7) New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows to group items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... 
Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...ASP.net Awesome jQuery Ajax Controls Samples and Tutorials: 1.0 samples: Demos and Tutorials for ASP.net Awesome VS2008 are in .NET 3.5 VS2010 are in .NET 4.0 (demos for the ASP.net Awesome jQuery Ajax Controls)New Projects"My" Search SharePoint 2010 WebParts: Solution consists of two SharePoint 2010 web parts inheriting from core results web part. The "My Core Search Results" webpart and the "Sortable Results" web part.Alerta Mensagens: Projeto de alerta de mensagens.Analyzer GT3000: VU MIF PS1 "Utopine Amnezija" PSI projektasAutoReservation: Miniprojekt an der HSR im Fach MsTechContestMeter: Program for measuring participant scores in programming contests.CountriesWFA: oby ostatniDotNet.Framework.Common: <DotNet.Framework.Common> ASP.NET?????? Esaan Windows Phone 7 Application Compitition: Resource for Esaan Windows Phone 7 Application Excel add-in to enable users View Azure Storage Tables: An Excel 2007 add-in using Azure Storage REST APIs (no Azure libraries required) and Excel custom task pane. Uses of such Office 2007 add-in: 1. Read data from Azure storage, populate that in excel and use it by sorting (not directly available in Azure storage) and so on. 2. Write Sorted data back (in sorted order) back to Azure storage. 3. Have data used frequently (such as dictionary of legal terms, or math formulae, and so on) on Azure storage and let your Office Excel users use th...fastconv: fastconv, a fast charset encoding convertor.gaosu: this is a gaosu projectGems: Gossip-enabled monitoring system.GridIT: GridIT is a Puzzle written in Visual Basic.Habanero Faces: A set of user interface libraries for use with Habanero Core used to build Windows Forms or Visual WebGUI user interfaces for your business objects. HomeSite: HomeSiteHTML to docx Converter: This converts HTML into Word documents (docx format). The code is written in PHP and works with PHPWord.Image Curator: image curation programjLemon: jLemon is an LALR(1) parser generator. Lemon is similar to the much more famous programs "YACC", "BISON" and "LEMON". jLemon is a pure Java program and compatible with "LEMON".jngsoftware: JNG Computer ServicesKinectoid: Kinectoid is a Kinect based pong game based on Neat Game Engine.Linq Accelerator: coming soon!Linq to SSRS (SQL Server Reporting Services): Linq to SSRS allows you as developer generate objects and map them to the repots on SQL Server Reporting Services in linq to sql fashion using lambda expressions and linq notation.MicroBitmap - Image Compressor: MicroBitmap is a image compressor which is different from all other compressors This compressor is focussed at compressing the bitmap and making it so small as possible so fast as possible You can find more information in the source codeOMX_AL_test_environment: OMX application layer simulation/testing environmentShuriken Plugins: Shuriken Plugins is a project for developing plugins for Shuriken, the free launcher app on Windows. Simple Command Line Backup Tool: Simple Command Line Backup Tool automates the backup of files from the command line for use in batch files. Includes file type filters/ recursive/non recursive and number days to keep backups for. Right now this is simply a quick app I created for a client of mine that needed a simple backup utility to go with a product I delivered. 
.NET 2.0 C# Console Application Hopefully this will morph into a useful utility with features not common in other command line backup utilities.SomethingSpacial: Initially designed via Interact Designer Ariel (http://www.facingblend.com/) Something Spacial as a design was used to help give some look and feel for the Silverlight User Group Starter Kit and then finally used as part of the Seattle Silverlight User Group Community Site.SqlServer Packer: SqlServer???????????????????,?????????。TVDBMetaData: Construct TV Series and Episode XML metadata from TheTvDB database.Ukázkové projekty: Obsahuje ukázkové projekty uživatele TenCoKaciStromy.Uzing Inklude: UI = Uzing + Inklude Uzing : Utility classes for .Net written in C# Inklude : Utility classes written in native C++ It supports very low level communications, but in very simple way. Current Capability - TCP, UDP, Thread in C# & C++ - String in C++ - Shared memory in C# - Communication between C# and C++ via shared memory Dependency: - TBB (http://threadingbuildingblocks.org/) - Boost (http://www.boost.org/) Even though it depends on such heavy libs, end developer does...Visual Studio Project Converter: This project is inspired by the original source code provided by Emmet Gray to convert Visual Studio solution and project files from one version to another (both backwards and forwards). This project has been converted to C# and updated to support new features.WarrantyPrint: WarrantyPrintWordPress???? on Windows Azure: WordPress?????Windows Azure Platform????????。 ???????SQL Azure??????。WPFResumeVideo: Play video on any computer, and resume where you left offWyvern's Depot: Personal code repository.

    Read the article

  • Graphics.MeasureCharacterRanges giving wrong size calculations in C#.Net?

    - by Owen Blacker
    I'm trying to render some text into a specific part of an image in a Web Forms app. The text will be user entered, so I want to vary the font size to make sure it fits within the bounding box. I have code that was doing this fine on my proof-of-concept implementation, but I'm now trying it against the assets from the designer, which are larger, and I'm getting some odd results. I'm running the size calculation as follows: StringFormat fmt = new StringFormat(); fmt.Alignment = StringAlignment.Center; fmt.LineAlignment = StringAlignment.Near; fmt.FormatFlags = StringFormatFlags.NoClip; fmt.Trimming = StringTrimming.None; int size = __startingSize; Font font = __fonts.GetFontBySize(size); while (GetStringBounds(text, font, fmt).IsLargerThan(__textBoundingBox)) { context.Trace.Write("MyHandler.ProcessRequest", "Decrementing font size to " + size + ", as size is " + GetStringBounds(text, font, fmt).Size() + " and limit is " + __textBoundingBox.Size()); size--; if (size < __minimumSize) { break; } font = __fonts.GetFontBySize(size); } context.Trace.Write("MyHandler.ProcessRequest", "Writing " + text + " in " + font.FontFamily.Name + " at " + font.SizeInPoints + "pt, size is " + GetStringBounds(text, font, fmt).Size() + " and limit is " + __textBoundingBox.Size()); I then use the following line to render the text onto an image I'm pulling from the filesystem: g.DrawString(text, font, __brush, __textBoundingBox, fmt); where: __fonts is a PrivateFontCollection, PrivateFontCollection.GetFontBySize is an extension method that returns a FontFamily RectangleF __textBoundingBox = new RectangleF(150, 110, 212, 64); int __minimumSize = 8; int __startingSize = 48; Brush __brush = Brushes.White; int size starts out at 48 and decrements within that loop Graphics g has SmoothingMode.AntiAlias and TextRenderingHint.AntiAlias set context is a System.Web.HttpContext (this is an excerpt from the ProcessRequest method of an IHttpHandler) The other methods are: private static RectangleF GetStringBounds(string text, Font font, StringFormat fmt) { CharacterRange[] range = { new CharacterRange(0, text.Length) }; StringFormat myFormat = fmt.Clone() as StringFormat; myFormat.SetMeasurableCharacterRanges(range); using (Graphics g = Graphics.FromImage(new Bitmap( (int) __textBoundingBox.Width - 1, (int) __textBoundingBox.Height - 1))) { g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias; g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias; Region[] regions = g.MeasureCharacterRanges(text, font, __textBoundingBox, myFormat); return regions[0].GetBounds(g); } } public static string Size(this RectangleF rect) { return rect.Width + "×" + rect.Height; } public static bool IsLargerThan(this RectangleF a, RectangleF b) { return (a.Width > b.Width) || (a.Height > b.Height); } Now I have two problems. Firstly, the text sometimes insists on wrapping by inserting a line-break within a word, when it should just fail to fit and cause the while loop to decrement again. I can't see why it is that Graphics.MeasureCharacterRanges thinks that this fits within the box when it shouldn't be word-wrapping within a word. This behaviour is exhibited irrespective of the character set used (I get it in Latin alphabet words, as well as other parts of the Unicode range, like Cyrillic, Greek, Georgian and Armenian). Is there some setting I should be using to force Graphics.MeasureCharacterRanges only to be word-wrapping at whitespace characters (or hyphens)? This first problem is the same as post 2499067. 
Secondly, in scaling up to the new image and font size, Graphics.MeasureCharacterRanges is giving me heights that are wildly off. The RectangleF I am drawing within corresponds to a visually apparent area of the image, so I can easily see when the text is being decremented more than is necessary. Yet when I pass it some text, the GetBounds call is giving me a height that is almost double what it's actually taking. Using trial and error to set the __minimumSize to force an exit from the while loop, I can see that 24pt text fits within the bounding box, yet Graphics.MeasureCharacterRanges is reporting that the height of that text, once rendered to the image, is 122px (when the bounding box is 64px tall and it fits within that box). Indeed, without forcing the matter, the while loop iterates to 18pt, at which point Graphics.MeasureCharacterRanges returns a value that fits. The trace log excerpt is as follows: Decrementing font size to 24, as size is 193×122 and limit is 212×64 Decrementing font size to 23, as size is 191×117 and limit is 212×64 Decrementing font size to 22, as size is 200×75 and limit is 212×64 Decrementing font size to 21, as size is 192×71 and limit is 212×64 Decrementing font size to 20, as size is 198×68 and limit is 212×64 Decrementing font size to 19, as size is 185×65 and limit is 212×64 Writing VENNEGOOR of HESSELINK in DIN-Black at 18pt, size is 178×61 and limit is 212×64 So why is Graphics.MeasureCharacterRanges giving me a wrong result? I could understand it being, say, the line height of the font if the loop stopped around 21pt (which would visually fit, if I screenshot the results and measure it in Paint.Net), but it's going far further than it should be doing because, frankly, it's returning the wrong damn results. Any and all help gratefully received. Thanks!
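One diagnostic that might help narrow down the first problem (a suggestion added here, not part of the original question): measure the longest single word with wrapping disabled and compare it against the box width. If any one word is wider than the bounding rectangle at a given size, GDI+ has no whitespace to break on and will typically wrap mid-word rather than report a failure, so the loop should keep decrementing in that case. A minimal sketch, reusing the names from the question:

private static bool LongestWordFits(string text, Font font, StringFormat fmt, RectangleF box)
{
    // Clone the format but forbid wrapping, so each word is measured on a single line.
    StringFormat noWrap = (StringFormat)fmt.Clone();
    noWrap.FormatFlags |= StringFormatFlags.NoWrap;

    using (Bitmap bmp = new Bitmap(1, 1))
    using (Graphics g = Graphics.FromImage(bmp))
    {
        g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
        foreach (string word in text.Split(' '))
        {
            SizeF size = g.MeasureString(word, font, int.MaxValue, noWrap);
            if (size.Width > box.Width)
            {
                return false; // this word alone forces a mid-word wrap at this font size
            }
        }
    }
    return true;
}

Calling this inside the while loop (for example, continuing to decrement while !LongestWordFits(text, font, fmt, __textBoundingBox)) would at least tell you whether the unwanted mid-word wrapping coincides with a single word exceeding the box width; it does not explain the inflated heights, which may be a separate issue with how MeasureCharacterRanges reports the region bounds.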

    Read the article

  • PASS: Bylaw Change 2013

    - by Bill Graziano
    PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors.  Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada.  We’ve collected much of that discussion in our Global Growth site.  You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings.  We also address these plans at the Board Q&A during the 2012 Summit. One of the biggest changes coming out of this process is around how we elect Board members.  And that requires a change to the bylaws.  We published the proposed bylaw changes as a red-lined document so you can clearly see the changes.  Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document and address other minor issues in the document.  There are numerous small wording changes throughout the document.  For example, we replaced every reference of “The Corporation” with the word “PASS” so it now reads “PASS is organized…”. Board Composition The biggest change in these bylaw changes is how the Board is composed and elected.  This discussion starts in section VI.2.  This section now says that some elected directors will come from geographic regions.  I think this is the best way to make sure we give all of our members a voice in the leadership of the organization.  The key parts of this section are: The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors. Currently PASS has six At-Large Directors elected by the members.  These changes allow for a Regional Director position that is elected by the members but must come from a particular region.  It also stipulates that there must always be at least one Director-at-Large who can come from any region. We also understand that PASS is currently a very US-centric organization.  Our Summit is held in America, roughly half our chapters are in the US and Canada and most of the Board members over the last ten years have come from America.  We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role by ensuring that two Regional seats are reserved specifically for Canada and the US. Other than that, the bylaws don’t create any specific regional seats.  These rules allow us to create Regional Director seats but don’t require it.  
We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it or how many regions there will be.  In our discussions we’ve broadly discussed regions for United States and Canada Europe, Middle East, and Africa (EMEA) Australia, New Zealand and Asia (also known as Asia Pacific or APAC) Mexico, South America, and Central America (LATAM) As you can see, our thinking is that there will be a few large regions.  I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas.  The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents. While the bylaws only require a single non-region specific At-large Director, I would expect we would always have two.  That way we can have one in each election.  I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest.  The Board is required to have any regions defined prior to the start of the election process. Board Elections – Regional Seats We spent a lot of time discussing how the elections would work for these Regional Director seats.  Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat.  Section VIII.3 reads: Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board. We debated whether to have multiple ballots or one single ballot.  Multiple ballot elections get complicated quickly.  Let’s say we have a ballot for US/Canada and one for Region 2.  After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat or have another election for the at-large position.  We think the best way to do this is a single ballot and putting the highest vote getters into the most restrictive seats.  Let’s look at an example: There are seats open for Region 1, Region 2 and at-large.  The election results are as follows: Candidate A (eligible for Region 1) – 550 votes Candidate B (eligible for Region 1) – 525 votes Candidate C (eligible for Region 1) – 475 votes Candidate D (eligible for Region 2) – 125 votes Candidate E (eligible for Region 2) – 75 votes In this case, Candidate A is the winner for Region 1 and is assigned that seat.  Candidate D is the winner for Region 2 and is assigned that seat.  The at-large seat is filled by the high remaining vote getter which is Candidate B. The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded.  This will be true whether we had multiple ballots or a single ballot.  Board Elections – Vacant Seats The other change to the election process is for vacant Board seats.  
The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future.  The most common case is when a Board member moves to an Officer role in the middle of their term.  One of the key changes is to allow the number of votes members have to match the number of open seats.  This allows each voter to express their preference on all open seats.  This only applies when we know about the opening prior to the call for nominations.  This all means that if there’s a seat that will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections.  Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible. We discussed the option of changing the bylaws to just take the next highest vote-getter in all other cases.  I think that’s wrong for the following reasons: Voters aren’t able to express an opinion on all candidates.  If there are five people running for three seats, you can only vote for three.  You have no way to express your preference between #4 and #5. Different candidates may have different information about the number of seats available.  A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote-getters will end up on the Board while the rest of the members believe there are only three openings.  This may affect someone’s decision to run.  I don’t think this creates a transparent, fair election. Board members may use their knowledge of the election results to decide whether to remain on the Board or not.  Admittedly this one is unlikely but I don’t want to create a situation where this accusation can be leveled. I think the majority of vacancies in the future will be handled through elections.  The bylaw section quoted above also indicates that partial term vacancies will be filled after the full term seats are filled. Removing Directors Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director.  We also had a clause that allowed appointed directors to be removed.  We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote.  The updated text reads: Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act as in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws. The first paragraph is updated and the second and third are unchanged (except cleaning up language).  If you scroll down and look at Addendum B of these bylaws you find the following: Cause for suspension or dismissal of a member of the Board of Directors may include: Inability to attend Board meetings on a regular basis. Inability or unwillingness to act in a capacity designated by the Board of Directors. Failure to fulfill the responsibilities of the office. 
- Inability to represent the Region elected to represent.
- Failure to act in a manner consistent with PASS's Bylaws and/or policies.
- Misrepresentation of responsibility and/or authority.
- Misrepresentation of PASS.
- Unresolved conflict of interests with Board responsibilities.
- Breach of confidentiality.

The line about the inability to represent your region is what we added to the bylaws in this revision.  We also added a clause to section VII.3 allowing the Board to remove an officer.  That clause is much less restrictive.  It doesn’t require cause and only requires a simple majority.

The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal.

Other

There are numerous other small changes throughout the document.

- Proxy voting.  The laws around how members and Board members may vote by proxy are specific in Illinois law.  PASS is an Illinois corporation and is subject to Illinois laws.  We changed section IV.5 to come into compliance with those laws.  Specifically this says you can only vote through a proxy if you have a written proxy through your authorized attorney.
- English language proficiency.  As we increase our global footprint we come across more members that aren’t native English speakers.  The business of PASS is conducted in English and it’s important that our Board members speak English.  If we get big enough to afford translators, we may be able to relax this but right now we need English language skills for effective Board members.
- Committees.  The language around committees in section IX is old and dated.  Our lawyers advised us to clean it up.  This section specifically applies to any committees that the Board may form outside of portfolios.  We removed the term limits, quorum and vacancies clause.  We don’t currently have any committees that this would apply to.  The Nominating Committee is covered elsewhere in the bylaws.
- Electronic Votes.  The change allows the Board to vote via email but the results must be unanimous.  This is to conform with Illinois state law.
- Immediate Past President.  There was no mechanism to fill the IPP role if an outgoing President chose not to participate.  We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote.
- Nominations Committee.  We’ve opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee.
- Revocation of Charters.  The language surrounding the revocation of charters for local groups was flagged by the lawyers. We have allowed for the local user group to make all necessary payments before the return of items to PASS is considered, if required.
- Bylaw notification.  We’ve spent countless meetings working on these bylaws with the intent to not open them again any time in the near future. Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved. I’m proud that the Board has remained committed to transparency and accountability to members. This clause will require that same level of commitment in the future even when all the current Board members have rolled off.

I think that covers everything.  I’d encourage you to look through the red-line document and see the changes.  It’s helpful to look at the language that’s being removed and the language that’s being added.  I’m happy to answer any questions here or you can email them to [email protected].

    Read the article

  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in Core Data. I've got a table named Character, which is made as follows. I'm filling the table in various steps: 1) fill the attributes of the table, 2) fill the Character relation (charRel). FYI, charRel is defined as follows. I'm feeding the contents by pulling the data from an XML file; the feeding code is this:

        curStr = [[NSMutableString stringWithString:[curStr stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain];
        NSLog(@"Parsing relation within these keys %@, in order to get'em associated", curStr);
        NSArray *chunks = [curStr componentsSeparatedByString:@","];
        NSError *error = nil;
        for (NSString *relId in chunks) {
            NSLog(@"Associating %@ with id %@", [currentCharacter valueForKey:@"character_id"], relId);
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", relId];
            [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:[self managedObjectContext]]];
            [request setPredicate:predicate];
            error = nil;
            NSArray *results = [[self managedObjectContext] executeFetchRequest:request error:&error];
            // error handling code
            if (error != nil) {
                NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@", [error localizedDescription]);
            } else if ([results count] > 0) {
                Character *relatedChar = [results objectAtIndex:0]; // grab the first result in the stack, could be done better!
                [currentCharacter addCharRelObject:relatedChar];
                // VICE VERSA RELATIONS
                NSArray *charRels = [relatedChar valueForKey:@"charRel"];
                BOOL alreadyRelated = NO;
                for (Character *charRel in charRels) {
                    if ([[charRel valueForKey:@"character_id"] isEqual:[currentCharacter valueForKey:@"character_id"]]) {
                        alreadyRelated = YES;
                        break;
                    }
                }
                if (!alreadyRelated) {
                    NSLog(@"\n\t\trelating %@ with %@", [relatedChar valueForKey:@"character_id"], [currentCharacter valueForKey:@"character_id"]);
                    [relatedChar addCharRelObject:currentCharacter];
                }
            } else {
                NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->");
            }
            [request release];
        }
        NSLog(@"\t\t### TOTAL OF RELATIONS FOR ID %@: %d\n%@", [currentCharacter valueForKey:@"character_id"], [[currentCharacter valueForKey:@"charRel"] count], currentCharacter);
        error = nil;
        /* SAVE THE CONTEXT */
        if (![managedObjectContext save:&error]) {
            NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]);
            NSArray *detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey];
            if (detailedErrors != nil && [detailedErrors count] > 0) {
                for (NSError *detailedError in detailedErrors) {
                    NSLog(@"\n################\t\tDetailedError: %@\n################", [detailedError userInfo]);
                }
            } else {
                NSLog(@" %@", [error userInfo]);
            }
        }

    At this point, when I print out the values of the currentCharacter, everything looks perfect: every relation is in its place.
    For example, in this log we can clearly see that this element has got 3 items in charRel:

        <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86> ; data: {
            catRel = "<relationship fault: 0x9a29db0 'catRel'>";
            charRel = (
                "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>",
                "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>",
                "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>"
            );
            "character_id" = 254;
            examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>";
            meaning = "\n Left";
            pinyin = "\n zu\U01d2";
            "pronunciation_it" = "\n zu\U01d2";
            strokenumber = 5;
            text = "\n \n <p>The most ancient form of this symbol";
            unicodevalue = "\n \U5de6";
        })

    Then, when I need to retrieve this item, I perform a fetch like this:

        // at first I get the single Character record
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSError *error;
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id];
        [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:_context]];
        [request setPredicate:predicate];
        NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error];

    When, for instance, I print out in NSLog the contents of charRel:

        NSArray *correlations = [singleCharacter valueForKey:@"charRel"];
        NSLog(@"CHARACTER OBJECT \n%@", correlations);

    I get this:

        Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character, renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character, inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00

    Hope that I made myself clear. This thing is driving me insane; I've googled all over the world, but I couldn't find a solution (and this makes me think it's an issue related to bad coding somehow :P). Thank you in advance guys. k

    Read the article

  • Accessing local variable doesn't improve performance

    - by NicMagnier
    The short version

    Why is this code:

        var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;

    more performant than this one?

        var index = Math.floor(ref_index) * 4;

    The long version

    This week, the author of Impact js published an article about some rendering issue: http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard
    In the article there was the source of a function to scale an image by accessing pixels in the canvas. I wanted to suggest some traditional ways to optimize this kind of code so that the scaling would be shorter at loading time. But after testing it, my result was most of the time worse than the original function. Guessing it was the JavaScript engine doing some smart optimization, I tried to understand a bit more what was going on, so I did a bunch of tests. But my results are quite confusing and I would need some help to understand what's going on.

    I have a test page here: http://www.mx981.com/stuff/resize_bench/test.html
    jsPerf: http://jsperf.com/local-variable-due-to-the-scope-lookup

    To start the test, click the picture and the results will appear in the console. There are three different versions.

    The original code:

        for( var y = 0; y < heightScaled; y++ ) {
            for( var x = 0; x < widthScaled; x++ ) {
                var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
                var indexScaled = (y * widthScaled + x) * 4;
                scaledPixels.data[ indexScaled ] = origPixels.data[ index ];
                scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ];
                scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ];
                scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ];
            }
        }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    One of my attempts to optimize it:

        var ref_index = 0;
        var ref_indexScaled = 0;
        var ref_step = 1 / scale;
        for( var y = 0; y < heightScaled; y++ ) {
            for( var x = 0; x < widthScaled; x++ ) {
                var index = Math.floor(ref_index) * 4;
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
                ref_index += ref_step;
            }
        }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    The same optimized code but with recalculating the index variable each time (Hybrid):

        var ref_index = 0;
        var ref_indexScaled = 0;
        var ref_step = 1 / scale;
        for( var y = 0; y < heightScaled; y++ ) {
            for( var x = 0; x < widthScaled; x++ ) {
                var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
                ref_index += ref_step;
            }
        }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    The only difference in the last two is the calculation of the 'index' variable. And to my surprise the optimized version is slower in most browsers (except Opera).
    Results of personal testing (not the jsPerf tests):

    Opera: Original 8668ms, Optimized 932ms, Hybrid 8696ms
    Chrome: Original 139ms, Optimized 145ms, Hybrid 136ms
    Safari: Original 433ms, Optimized 853ms, Hybrid 451ms
    Firefox: Original 343ms, Optimized 422ms, Hybrid 350ms

    After digging around, it seems the usual good practice is to mainly access local variables because of the scope lookup. Since the optimized version only reads one local variable, it should be faster than the hybrid code, which touches multiple variables and objects in addition to the various operations involved. So why is the "optimized" version slower?

    I thought it might be because some JavaScript engines don't optimize the optimized version because it is not hot enough, but after using --trace-opt in Chrome, it seems all versions are properly compiled by V8. At this point I am a bit clueless and wonder if somebody knows what is going on. I also did some more test cases on this page: http://www.mx981.com/stuff/resize_bench/index.html

    Read the article

  • SharePoint webpart for WebEx

    - by Kelly French
    Is there a SharePoint webpart available for WebEx? We do a lot of web conferencing and want the functionality to be exposed through SharePoint but WebEx hasn't released a webpart yet. The solution provided by WebEx has its critics. I searched for 'SharePoint' in Cisco's WebEx knowledgebase and got back zero (0) results. Has anyone found either a workaround or maybe a third-party webpart?

    Read the article

  • How to setup a static multicast ARP entry with Cisco SG300?

    - by Fredrik Hedberg
    We're running a Microsoft NLB cluster in multicast mode as a load balancer. Using our old Cisco IOS switches, we propagate access to the cluster to our branches using a static ARP entry in the core router:

        arp 10.20.1.226 03bf.0a14.01e2 ARPA

    But how does one solve this using non-IOS based Cisco hardware such as the SG300 series? Adding a static ARP entry results in an error message telling the user that the hardware address needs to be a valid unicast MAC address.

    Read the article

  • Bash script to create mass series of directories

    - by Volomike
    I need to create a Bash script to go into every user's home folder, seek out a wp-content folder, create a directory uploads under it, and then chmod 0756 uploads. How do I achieve this? I imagine I need to use find with a regexp/regex, and then tell it to run another bash script on the results.

    Read the article

  • mail merge e-mail in Word 2007 with attachment

    - by kevyn
    Is there a simple way to mail merge with Word 2007 and add an attachment? I've searched google, and all results point to pasting in VB code. I want a small team of novice users to be able to mail merge e-mails and add attachments. Does anyone know a simple way of doing this without code?

    Read the article

< Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >