Search Results

Search found 49670 results on 1987 pages for 'service method'.

Page 330/1987 | < Previous Page | 326 327 328 329 330 331 332 333 334 335 336 337  | Next Page >

  • How can I pass in a params of Expression<Func<T, object>> to a method?

    - by Pure.Krome
    Hi folks, I have the following two methods :- public static IQueryable<T> IncludeAssociations<T>(this IQueryable<T> source, params string[] associations) { ... } public static IQueryable<T> IncludeAssociations<T>(this IQueryable<T> source, params Expression<Func<T, object>>[] expressions) { ... } Now, when I try and pass in a params of Expression<Func<T, object>>[], it always calls the first method (the string[] one, and of course that value is null). Eg. Expression<Func<Order, object>> x1 = x => x.User; Expression<Func<Order, object>> x2 = x => x.User.Passport; var foo = _orderRepo .Find() .IncludeAssociations(new {x1, x2} ) .ToList(); Can anyone see what I've done wrong? Why is it thinking my params are a string? Can I force the type of the two variables?
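
    A minimal sketch of one way to force the Expression overload (not taken from the original thread): hand the compiler an explicitly typed array, or pass the variables directly as params arguments, so overload resolution cannot fall back to params string[].

        // Option 1: explicitly typed array
        var foo = _orderRepo
            .Find()
            .IncludeAssociations(new Expression<Func<Order, object>>[] { x1, x2 })
            .ToList();

        // Option 2: let params expand the typed variables
        var bar = _orderRepo
            .Find()
            .IncludeAssociations(x1, x2)
            .ToList();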

    Read the article

  • Why notify listeners in a content provider query method?

    - by cbrulak
    Vogella has this blog post about content providers and the snippet below (at the bottom) with this line: cursor.setNotificationUri(getContext().getContentResolver(), uri); I'm curious as to why one would want to notify listeners about a query operation. Am I missing something? Thanks @Override public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) { // Using SQLiteQueryBuilder instead of query() method SQLiteQueryBuilder queryBuilder = new SQLiteQueryBuilder(); // Check if the caller has requested a column which does not exist checkColumns(projection); // Set the table queryBuilder.setTables(TodoTable.TABLE_TODO); int uriType = sURIMatcher.match(uri); switch (uriType) { case TODOS: break; case TODO_ID: // Adding the ID to the original query queryBuilder.appendWhere(TodoTable.COLUMN_ID + "=" + uri.getLastPathSegment()); break; default: throw new IllegalArgumentException("Unknown URI: " + uri); } SQLiteDatabase db = database.getWritableDatabase(); Cursor cursor = queryBuilder.query(db, projection, selection, selectionArgs, null, null, sortOrder); // Make sure that potential listeners are getting notified cursor.setNotificationUri(getContext().getContentResolver(), uri); return cursor; }
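
    The registration only pays off later: when a write operation notifies the resolver for that URI, any CursorAdapter or CursorLoader still holding the cursor returned by query() knows to requery. A hedged sketch of the matching insert() (the method and table names follow the question, the rest is an assumption):

        @Override
        public Uri insert(Uri uri, ContentValues values) {
            SQLiteDatabase db = database.getWritableDatabase();
            long id = db.insert(TodoTable.TABLE_TODO, null, values);
            // The other half of setNotificationUri(): wake up every observer registered on this URI.
            getContext().getContentResolver().notifyChange(uri, null);
            return Uri.withAppendedPath(uri, String.valueOf(id));
        }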

    Read the article

  • Why am I getting this error when overriding an inherited method?

    - by Sergio Tapia
    Here's my parent class: public abstract class BaseFile { public string Name { get; set; } public string FileType { get; set; } public long Size { get; set; } public DateTime CreationDate { get; set; } public DateTime ModificationDate { get; set; } public abstract void GetFileInformation(); public abstract void GetThumbnail(); } And here's the class that's inheriting it: public class Picture:BaseFile { public override void GetFileInformation(string filePath) { FileInfo fileInformation = new FileInfo(filePath); if (fileInformation.Exists) { Name = fileInformation.Name; FileType = fileInformation.Extension; Size = fileInformation.Length; CreationDate = fileInformation.CreationTime; ModificationDate = fileInformation.LastWriteTime; } } public override void GetThumbnail() { } } I thought when a method was overridden, I could do what I wanted with it. Any help please? :)
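
    The compiler rejects the override because the base declares GetFileInformation() with no parameters while the child declares GetFileInformation(string filePath), which is a different method. A minimal sketch of one fix, assuming the intent is to pass the path into the call (the alternative is to keep the parameterless signature and store the path on the object):

        public abstract class BaseFile
        {
            // The abstract signature must match the override exactly.
            public abstract void GetFileInformation(string filePath);
            public abstract void GetThumbnail();
        }

        public class Picture : BaseFile
        {
            public override void GetFileInformation(string filePath) { /* as in the question */ }
            public override void GetThumbnail() { }
        }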

    Read the article

  • How do I use an index in an array reference as a method reference in Perl?

    - by Robert P
    Similar to this question about iterating over subroutine references, and as a result of answering this question about an OO dispatch table, I was wondering how to call a method reference inside a reference, without removing it first, or if it was even possible. For example: package Class::Foo; use 5.012; #Yay autostrict! use warnings; # a basic constructor for illustration purposes.... sub new { my $class = shift; return bless {}, $class; } # some subroutines for flavor... sub sub1 { say 'in sub 1' } sub sub2 { say 'in sub 2' } sub sub3 { say 'in sub 3' } # and a way to dynamically load the tests we're running... sub sublist { my $self = shift; return [ $self->can('sub1'), $self->can('sub3'), $self->can('sub2'), ]; } package main; my $instance = Class::Foo->new(a => 1, b => 2, c => 3); my $tests = $instance->sublist(); my $index = int(rand($#{$tests})); # <-- HERE So, at HERE, we could do: my $ref = $tests->[$index]; $instance->$ref(); but how would we do this, without removing the reference first?
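
    One idiom that calls the stored code ref in place, without copying it into a temporary scalar first (a sketch, not from the original thread; the ${\ ... } wrapper lets an arbitrary expression stand on the right of the arrow):

        $instance->${\ $tests->[$index] }();   # the code ref is invoked with $instance as its invocant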

    Read the article

  • How should I return different types in a method based on the value of a string in Java?

    - by Siracuse
    I'm new to Java and I have run into the following problem: I have created several classes which all implement the interface "Parser". I have a JavaParser, PythonParser, CParser and finally a TextParser. I'm trying to write a method so it will take either a File or a String (representing a filename) and return the appropriate parser given the extension of the file. Here is some pseudo-code of what I'm basically attempting to do: public Parser getParser(String filename) { String extension = filename.substring(filename.lastIndexOf(".")); switch(extension) { case "py": return new PythonParser(); case "java": return new JavaParser(); case "c": return new CParser(); default: return new TextParser(); } } In general, is this the right way to handle this situation? Also, how should I handle the fact that Java doesn't allow switching on strings? Should I use the .hashCode() value of the strings? I feel like there is some design pattern or something for handling this but it eludes me. Is this how you would do it?
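
    One common shape for this is a lookup table keyed by extension, which avoids switching on a String altogether on pre-Java-7 compilers. A sketch, assuming the parser classes named above and java.util.Map/HashMap imports; note it hands out shared parser instances, so construct new ones per call if the parsers keep state:

        private static final Map<String, Parser> PARSERS = new HashMap<String, Parser>();
        static {
            PARSERS.put("py", new PythonParser());
            PARSERS.put("java", new JavaParser());
            PARSERS.put("c", new CParser());
        }

        public Parser getParser(String filename) {
            // +1 drops the dot, so the key is "py" rather than ".py"
            String extension = filename.substring(filename.lastIndexOf('.') + 1);
            Parser parser = PARSERS.get(extension);
            return parser != null ? parser : new TextParser();
        }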

    Read the article

  • How to create a generic method in C# that's all applicable to many types - ints, strings, doubles et

    - by satyajit
    Let's say I have a method to remove duplicates in an integer array: public int[] RemoveDuplicates(int[] elems) { HashSet<int> uniques = new HashSet<int>(); foreach (int item in elems) uniques.Add(item); elems = new int[uniques.Count]; int cnt = 0; foreach (var item in uniques) elems[cnt++] = item; return elems; } How can I make this generic such that it now accepts a string array and removes duplicates in it? How about a double array? I know I am probably mixing things here in between primitive and value types. For your reference the following code won't compile: public List<T> RemoveDuplicates(List<T> elems) { HashSet<T> uniques = new HashSet<T>(); foreach (var item in elems) uniques.Add(item); elems = new List<T>(); int cnt = 0; foreach (var item in uniques) elems[cnt++] = item; return elems; } The reason is that all generic types should be closed at run time. Thanks for your comments
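
    A minimal generic sketch (an illustration, not the only way): declare the type parameter on the method itself, and it closes over int, string, double, or any other element type at the call site.

        public static T[] RemoveDuplicates<T>(T[] elems)
        {
            var uniques = new HashSet<T>(elems);   // HashSet<T> drops duplicates for any element type
            var result = new T[uniques.Count];
            uniques.CopyTo(result);
            return result;
        }

        // usage: RemoveDuplicates(new[] { 1, 2, 2, 3 });  RemoveDuplicates(new[] { "a", "b", "a" });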

    Read the article

  • Why won't EF4 generate a method to support my Function Import?

    - by Deane
    I have a stored proc in my database which returns an integer. I added a Function Import to my model. This appears in the EDMX file: <Function Name="GetTotalEntityCount" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo" /> However, no method actually gets generated for this. It should be top level, right? using (MyContext context = new MyContext()) { context.MyMethodShouldBeRightHere(); } Nothing appears in Intellisense, I've gone through the designer.cs file and there's nothing in there, and reflected the DLL...nothing. The code generator is just not generating any code to support this stored proc. I added another table to my database and updated the model, and that came in, so the model will update, it's just specifically ignoring this stored proc. I've tried everything I can think of, and consulted every resource I can find, and as near as I can tell, I'm doing everything right. I'm using EF4, database-first. (I'm pretty sure on the version, anyway. This shows up in the generated file: Runtime Version:4.0.30319.1 )
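
    For what it's worth, the snippet above is only the storage-model (SSDL) Function entry; the generated method comes from a conceptual-model FunctionImport plus its mapping. A rough, hedged sketch of what those entries usually look like in an EF4 EDMX (the names and container paths here are assumptions, check against your own model):

        <!-- conceptual model (CSDL), inside the EntityContainer -->
        <FunctionImport Name="GetTotalEntityCount" ReturnType="Collection(Int32)" />

        <!-- mapping (MSL), inside the EntityContainerMapping -->
        <FunctionImportMapping FunctionImportName="GetTotalEntityCount"
                               FunctionName="MyModel.Store.GetTotalEntityCount" />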

    Read the article

  • How do I alias the scala setter method 'myvar_$eq(myval)' to something more pleasing when in java?

    - by feydr
    I've been converting some code from java to scala lately trying to teach myself the language. Suppose we have this scala class: class Person() { var name:String = "joebob" } Now I want to access it from java so I can't use dot-notation like I would if I was in scala. So I can get my var's contents by issuing: Person person = new Person(); System.out.println(person.name()); and set it via: Person person = new Person(); person.name_$eq("sallysue"); System.out.println(person.name()); This holds true because our Person class looks like this in javap: Compiled from "Person.scala" public class Person extends java.lang.Object implements scala.ScalaObject{ public Person(); public void name_$eq(java.lang.String); public java.lang.String name(); public int $tag() throws java.rmi.RemoteException; } Yes, I could write my own getters/setters but I hate filling classes up with that and it doesn't make a ton of sense considering I already have them -- I just want to alias the _$eq method better. (This actually gets worse when you are dealing with stuff like antlr because then you have to escape it and it ends up looking like person.name_\$eq("newname");) Note: I'd much rather put up with this than fill my classes with more setter methods. So what would you do in this situation?
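
    One way to get friendlier names on the Java side without hand-writing getters and setters (a sketch, not from the original thread): annotate the var with BeanProperty, which makes the Scala compiler emit getName()/setName(...) alongside the normal Scala accessors. Depending on the Scala version the annotation lives in scala.beans or scala.reflect.

        import scala.beans.BeanProperty   // scala.reflect.BeanProperty on older releases

        class Person {
          @BeanProperty var name: String = "joebob"
        }

        // from Java:
        // Person person = new Person();
        // person.setName("sallysue");
        // System.out.println(person.getName());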

    Read the article

  • When I try to pass large amounts of information using jquery $.ajax(post) method. it throws potenti

    - by dotnetrocks
    I am trying to create a preview window for my text editor on my blog page. I need to send the content to the server to clean up the text entered before I can preview it in the preview window. I was trying to use $.ajax({ type: method, url: url, data: values, success: LoadPageCallback(targetID), error: function(msg) { $('#' + targetID).attr('innerHTML', 'An error has occurred. Please try again.'); } }); Whenever I try to click on the preview button it returns an XMLHttpRequest error. The error description - Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. You can disable request validation by setting validateRequest=false in the Page directive or in the configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case. The ValidateRequest for the page is set to false. Is there a way I can set ValidateRequest to false for the ajax call? Please advise. Thank you for reading my post.
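
    If this is running on ASP.NET 4.0, a page-level ValidateRequest="false" is ignored by default; request validation has to be dropped back to 2.0 behaviour in web.config (a hedged sketch; the safer alternative is to encode the editor content before posting it and decode server-side):

        <system.web>
          <httpRuntime requestValidationMode="2.0" />
        </system.web>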

    Read the article

  • Have Microsoft changed how ASP.NET MVC deals with duplicate action method names?

    - by Jason Evans
    I might be missing something here, but in ASP.NET MVC 4, I can't get the following to work. Given the following controller: public class HomeController : Controller { public ActionResult Index() { return View(); } [HttpPost] public ActionResult Index(string order1, string order2) { return null; } } and its view: @{ ViewBag.Title = "Home"; } @using (Html.BeginForm()) { @Html.TextBox("order1")<br /> @Html.TextBox("order2") <input type="submit" value="Save"/> } When I start the app, all I get is this: The current request for action 'Index' on controller type 'HomeController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type ViewData.Controllers.HomeController System.Web.Mvc.ActionResult Index(System.String, System.String) on type ViewData.Controllers.HomeController Now, in ASP.NET MVC 3 the above works fine, I just tried it, so what's changed in ASP.NET MVC 4 to break this? OK, there could be a chance that I'm doing something silly here and not noticing it. EDIT: I notice that in the MVC 4 app, the Global.asax.cs file did not contain this: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } which the MVC 3 app does, by default. So I added the above to the MVC 4 app but it fails with the same error. Note that the MVC 3 app does work fine with the above route. I'm passing the "order" data via Request.Form. EDIT: In the file RouteConfig.cs I can see RegisterRoutes is executed, with the following default route: routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }); I still get the original error regarding the ambiguity between which Index() method to call.
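
    One frequent cause of exactly this ambiguity (an assumption here, not confirmed by the question): the [HttpPost] attribute resolving to System.Web.Http.HttpPostAttribute from the Web API namespaces that MVC 4 templates reference, which the MVC action selector ignores, so both Index actions remain candidates for a GET. A sketch of the check:

        using System.Web.Mvc;   // make sure this, not System.Web.Http, supplies [HttpPost]

        public class HomeController : Controller
        {
            public ActionResult Index() { return View(); }

            [HttpPost]
            public ActionResult Index(string order1, string order2) { return View(); }
        }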

    Read the article

  • what use does the javascript for each method have (that map can't do)?

    - by JohnMerlino
    Hey all, The only difference I see in map and foreach is that map is returning an array and foreach is not. However, I don't even understand the last line of the foreach method "func.call(scope, this[i], i, this);". For example, isn't "this" and "scope" referring to same object and isn't this[i] and i referring to the current value in the loop? I noticed on another post someone said "Use forEach when you want to do something on the basis of each element of the list. You might be adding things to the page, for example. Essentially, it's great for when you want "side effects". I don't know what is meant by side effects. Array.prototype.map = function(fnc) { var a = new Array(this.length); for (var i = 0; i < this.length; i++) { a[i] = fnc(this[i]); } return a; } Array.prototype.forEach = function(func, scope) { scope = scope || this; for (var i = 0, l = this.length; i < l; i++) func.call(scope, this[i], i, this); } Finally, are there any real uses for these methods in javascript (since we aren't updating a database) other than to manipulate numbers like this: alert([1,2,3,4].map(function(x){ return x + 1})); //this is the only example I ever see of map in javascript. Thanks for any reply.
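
    A small sketch of the "side effects" point: map builds and returns a new array, while forEach is for doing something per element, such as touching the DOM (the #price-list element below is hypothetical):

        var prices = [1, 2, 3];
        var labels = prices.map(function (p) { return "$" + p; });   // new array: ["$1", "$2", "$3"]
        labels.forEach(function (text) {
            var li = document.createElement("li");
            li.textContent = text;
            document.getElementById("price-list").appendChild(li);   // side effect: mutates the page
        });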

    Read the article

  • What is the purpose of the QAbstractButton::checkStateSet() method?

    - by darkadept
    I'm writing my own 4 state button and I'm not quite sure what to put in the checkStateSet() method, if anything. Here is what I've got so far: SyncDirectionButton::SyncDirectionButton(QWidget *parent) : QAbstractButton(parent) { setCheckable(true); setToolTip(tr("Click to change the sync direction")); _state = NoSync; } void SyncDirectionButton::paintEvent(QPaintEvent *e) { static QPixmapCache::Key noneKey; static QPixmapCache::Key bothKey; static QPixmapCache::Key leftKey; static QPixmapCache::Key rightKey; QPainter p(this); QPixmap pix; if (checkState() == SyncLeft) { if (!QPixmapCache::find(leftKey, &pix)) { pix.load(":/icons/sync-left.png"); leftKey = QPixmapCache::insert(pix); } } else if (checkState() == SyncBoth) { if (!QPixmapCache::find(rightKey, &pix)) { pix.load(":/icons/sync-right.png"); rightKey = QPixmapCache::insert(pix); } } else if (checkState() == SyncRight) { if (!QPixmapCache::find(bothKey, &pix)) { pix.load(":/icons/sync-both.png"); bothKey = QPixmapCache::insert(pix); } } else if (checkState() == NoSync) { if (!QPixmapCache::find(noneKey, &pix)) { pix.load(":/icons/application-exit.png"); noneKey = QPixmapCache::insert(pix); } } p.drawPixmap(0,0,pix); } SyncDirectionButton::DirectionState SyncDirectionButton::checkState() const { return _state; } void SyncDirectionButton::setCheckState(DirectionState state) { setChecked(state != NoSync); if (state != _state) { _state = state; } } QSize SyncDirectionButton::sizeHint() const { return QSize(180,90); } void SyncDirectionButton::checkStateSet() { } void SyncDirectionButton::nextCheckState() { setCheckState((DirectionState)((checkState()+1)%4)); }
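
    For reference, checkStateSet() is the hook QAbstractButton calls when someone uses plain setChecked() (outside of nextCheckState()), so a multi-state button can fold that boolean back into its richer state. A hedged sketch of what could go in the empty body above; which state "checked" should map to is an assumption:

        void SyncDirectionButton::checkStateSet()
        {
            // setChecked() was called directly, bypassing nextCheckState():
            // keep the 4-way state consistent with the plain checked property.
            if (!isChecked())
                _state = NoSync;
            else if (_state == NoSync)
                _state = SyncLeft;   // hypothetical default direction when checked without one set
            update();
        }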

    Read the article

  • How to call a generic method with an anonymous type involving generics?

    - by Alex Black
    I've got this code that works: def testTypeSpecialization = { class Foo[T] def add[T](obj: Foo[T]): Foo[T] = obj def addInt[X <% Foo[Int]](obj: X): X = { add(obj) obj } val foo = addInt(new Foo[Int] { def someMethod: String = "Hello world" }) assert(true) } But, I'd like to write it like this: def testTypeSpecialization = { class Foo[T] def add[X, T <% Foo[X]](obj: T): T = obj val foo = add(new Foo[Int] { def someMethod: String = "Hello world" }) assert(true) } This second one fails to compile: no implicit argument matching parameter type (Foo[Int]{ ... }) => Foo[Nothing] was found. Basically: I'd like to create a new anonymous class/instance on the fly (e.g. new Foo[Int] { ... } ), and pass it into an "add" method which will add it to a list, and then return it. The key thing here is that for the variable from "val foo = " I'd like its type to be the anonymous class, not Foo[Int], since it adds methods (someMethod in this example). Any ideas? I think the 2nd one fails because the type Int is being erased. I can apparently 'hint' the compiler like this: def testTypeSpecialization = { class Foo[T] def add[X, T <% Foo[X]](dummy: X, obj: T): T = obj val foo = add(2, new Foo[Int] { def someMethod: String = "Hello world" }) assert(true) }

    Read the article

  • Using the XElement.Elements method, can I find elements with wildcard namespace but the same name?

    - by gav
    Hi All, Trying to do a simple parse of an XML document. What's the easiest way to pull out the two PropertyGroups below? <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 1 </PropertyGroup> <PropertyGroup xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 2 </PropertyGroup> </Project> I have been trying to use XElement.Elements(XName) but to do so I need to prefix PropertyGroup with the xmlns. The issue is that I don't care about the name space and if it changes in future I would still like all PropertyGroups to be retrieved. var xml = XElement.Load(fileNameWithPath); var nameSpace = xml.GetDefaultNamespace(); var propertyGroups= xml.Elements(nameSpace + "PropertyGroup"); Can you improve on this code such that I don't need to prepend with nameSpace? I know I can essentially just reimplement the Elements method but I was hoping there was some way to pass a wildcard namespace? Thanks, Gavin
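
    One namespace-agnostic sketch (assumes a using System.Linq directive): match on the local name only, so the query keeps working even if the default namespace changes later.

        var propertyGroups = xml.Elements()
                                .Where(e => e.Name.LocalName == "PropertyGroup");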

    Read the article

  • Implementing a non-public assignment operator with a public named method?

    - by Casey
    It is supposed to copy an AnimatedSprite. I'm having second thoughts that it has the unfortunate side effect of changing the *this object. How would I implement this feature without the side effect? EDIT: Based on new answers, the question should really be: How do I implement a non-public assignment operator with a public named method without side effects? (Changed title as such). public: AnimatedSprite& AnimatedSprite::Clone(const AnimatedSprite& animatedSprite) { return (*this = animatedSprite); } protected: AnimatedSprite& AnimatedSprite::operator=(const AnimatedSprite& rhs) { if(this == &rhs) return *this; destroy_bitmap(this->_frameImage); this->_frameImage = create_bitmap(rhs._frameImage->w, rhs._frameImage->h); clear_bitmap(this->_frameImage); this->_frameDimensions = rhs._frameDimensions; this->CalcCenterFrame(); this->_frameRate = rhs._frameRate; if(rhs._animation != nullptr) { delete this->_animation; this->_animation = new a2de::AnimationHandler(*rhs._animation); } else { delete this->_animation; this->_animation = nullptr; } return *this; }
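
    A sketch of a Clone without the side effect (assuming AnimatedSprite has, or is given, a correct copy constructor): build and return a brand-new object from the argument instead of assigning into *this.

        AnimatedSprite* AnimatedSprite::Clone(const AnimatedSprite& animatedSprite)
        {
            // *this is never touched; the caller owns the returned copy.
            return new AnimatedSprite(animatedSprite);
        }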

    Read the article

  • Method returns an IDisposable - Should I dispose of the result, even if it's not assigned to anythin

    - by mjd79
    This seems like a fairly straightforward question, but I couldn't find this particular use-case after some searching around. Suppose I have a simple method that, say, determines if a file is opened by some process. I can do this (not 100% correctly, but fairly well) with this: public bool IsOpen(string fileName) { try { File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.None); } catch { // if an exception is thrown, the file must be opened by some other process return true; } } (obviously this isn't the best or even correct way to determine this - File.Open throws a number of different exceptions, all with different meanings, but it works for this example) Now the File.Open call returns a FileStream, and FileStream implements IDisposable. Normally we'd want to wrap the usage of any FileStream instantiations in a using block to make sure they're disposed of properly. But what happens in the case where we don't actually assign the return value to anything? Is it still necessary to dispose of the FileStream, like so: try { using (File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.None)); { /* nop */ } } catch { return true; } Should I create a FileStream instance and dispose of that? try { using (FileStream fs = File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.None)); } ... Or are these totally unnecessary? Can we simply call File.Open and not assign it to anything (first code example), and let the GC dispose of it right away?
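
    Generally yes, the returned FileStream should still be disposed even though it is never assigned, and a using statement works fine on an unassigned expression. A sketch of the whole method in that style (the catch is narrowed to IOException purely as an illustration):

        public bool IsOpen(string fileName)
        {
            try
            {
                using (File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.None))
                {
                    return false;   // we got exclusive access, so nothing else has it open
                }
            }
            catch (IOException)
            {
                return true;
            }
        }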

    Read the article

  • How to call a method in another class using the arraylist index in java?

    - by Puchatek
    Currently I have two classes: a Classroom class and a School class. public void addTeacherToClassRoom(Classroom myClassRoom, String TeacherName) I would like my method addTeacherToClassRoom to use the Classroom ArrayList index number to call setTeacherName, e.g. index 0 = maths, index 1 = science, and I would like to setTeacherName("Daniel") on index 1 (science). Many thanks. public class Classroom { private String classRoomName; private String teacherName; public void setClassRoomName(String newClassRoomName) { classRoomName = newClassRoomName; } public String returnClassRoomName() { return classRoomName; } public void setTeacherName(String newTeacherName) { teacherName = newTeacherName; } public String returnTeacherName() { return teacherName; } } import java.util.ArrayList; public class School { private ArrayList<Classroom> classrooms; private String classRoomName; private String teacherName; public School() { classrooms = new ArrayList<Classroom>(); } public void addClassRoom(Classroom newClassRoom, String theClassRoomName) { classrooms.add(newClassRoom); classRoomName = theClassRoomName; } public void addTeacherToClassRoom(Classroom myClassRoom, String TeacherName) { myClassRoom.setTeacherName(TeacherName); } }
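
    A sketch of addressing the classroom by its list position (assumes the School and Classroom classes above): take the index instead of the Classroom reference and look the room up with get().

        public void addTeacherToClassRoom(int classRoomIndex, String teacherName) {
            Classroom room = classrooms.get(classRoomIndex);
            room.setTeacherName(teacherName);
        }

        // usage: school.addTeacherToClassRoom(1, "Daniel");   // index 1 == science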

    Read the article

  • Objective-C Result from a Static Method saved to class instance variable giving "EXC_BAD_ACCESS" when used.

    - by KinGBin
    I am trying to store the md5 string as a class instance variable instead of the actual password. I have a static function that will return a md5 string which I'm trying to store in an instance variable instead of the actual password. I have the following setter for my class instance variable: -(void)setPassword:(NSString *)newpass{ if(newpass != password){ password = [utils md5HexDigest:newpass]; } } This will pass back the correct md5 string and save it to the password variable in my init function: [self setPassword:pword];. If I call another instance method and try to access self.password" I will get "EXC_BAD_ACCESS". I understand that the memory is getting released, but I have no clue to make sure it stays. I have tried alloc init with autorelease with no luck. This is the md5HexDigest function getting called during the init (graciously found in another stackoverflow question): + (NSString*)md5HexDigest:(NSString*)input { const char* str = [input UTF8String]; unsigned char result[CC_MD5_DIGEST_LENGTH]; CC_MD5(str, strlen(str), result); NSMutableString *ret = [NSMutableString stringWithCapacity:CC_MD5_DIGEST_LENGTH*2]; for(int i = 0; i<CC_MD5_DIGEST_LENGTH; i++) { [ret appendFormat:@"%02x",result[i]]; } return ret; } Any help/pointers would be greatly appreciated. I would rather have the md5 string saved in memory than the actual password calling the md5 every time I needed to use the password. Thanks in advance.
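
    Under manual reference counting, the string coming back from md5HexDigest: is autoreleased, so storing it straight into the ivar leaves nothing owning it; by the time another method reads self.password it has been deallocated. A sketch of a setter that takes ownership (assuming non-ARC code, as the question implies):

        - (void)setPassword:(NSString *)newpass {
            NSString *digest = [utils md5HexDigest:newpass];
            if (digest != password) {
                [password release];
                password = [digest copy];   // or [digest retain]; either way the ivar now owns it
            }
        }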

    Read the article

  • When calling a static method on parent class, can the parent class deduce the type on the child (C#)

    - by Matt
    Suppose we have 2 classes, Child, and the class from which it inherits, Parent. class Parent { public static void MyFunction(){} } class Child : Parent { } Is it possible to determine in the parent class how the method was called? Because we can call it two ways: Parent.MyFunction(); Child.MyFunction(); My current approach was trying to use: MethodInfo.GetCurrentMethod().ReflectedType; // and MethodInfo.GetCurrentMethod().DeclaringType; But both appear to return the Parent type. If you are wondering what, exactly I am trying to accomplish (and why I am violating the basic OOP rule that the parent shouldn't have to know anything about the child), the short of it is this (let me know if you want the long version): I have a Model structure representing some of our data that persists to the database. All of these models inherit from an abstract Parent. This parent implements a couple of events, such as SaveEvent, DeleteEvent, etc. We want to be able to subscribe to events specific to the type. So, even though the event is in the parent, I want to be able to do: Child.SaveEvent += new EventHandler((sender, args) => {}); I have everything in place, where the event is actually backed by a dictionary of event handlers, hashed by type. The last thing I need to get working is correctly detecting the Child type, when doing Child.SaveEvent. I know I can implement the event in each child class (even forcing it through use of abstract), but it would be nice to keep it all in the parent, which is the class actually firing the events (since it implements the common save/delete/change functionality).
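
    Static members are not virtual, so the parent cannot see which child name the call was written against; Parent.MyFunction() and Child.MyFunction() compile to the same call site. One common workaround (an assumption, not from the question) is to make the parent generic over the concrete child, so each child type gets its own static event storage:

        public abstract class Model<T> where T : Model<T>
        {
            public static event EventHandler SaveEvent;

            protected void RaiseSaveEvent()
            {
                var handler = SaveEvent;               // each closed type Model<Child> has its own field
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        public class Child : Model<Child> { }

        // Child.SaveEvent += (sender, args) => { /* fires only for Child saves */ };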

    Read the article

  • LLVM/Clang bug found in convenience method and NSClassFromString(...) alloc/release

    - by pirags
    I am analyzing Objective-C iPhone project with LLVM/Clang static analyzer. I keep getting two reported bugs, but I am pretty sure that the code is correct. 1) Convenience method. + (UILabel *)simpleLabel { UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(100, 10, 200, 25)]; label.adjustsFontSizeToFitWidth = YES; [label autorelease]; // Object with +0 retain counts returned to caller where a +1 (owning) retain count is expected. return label; } 2) The [NSClassFromString(...) alloc] returns retainCount + 1. Am I right? Class detailsViewControllerClass = NSClassFromString(self.detailsViewControllerName); UIViewController *detailsViewController = [[detailsViewControllerClass alloc] performSelector:@selector(initWithAdditive:) withObject:additive]; [self.parentController.navigationController pushViewController:detailsViewController animated:YES]; [detailsViewController release]; // Incorrect decrement of the reference count of an object is not owned... Are these some Clang issues or I am totally mistaken in these both cases?

    Read the article

  • TypeError: Error #1009: Cannot access a property or method of a null object reference. at RECOVER_fyp1_fla::MainTimeline/abc1()

    - by user2643323
    TypeError: Error #1009: Cannot access a property or method of a null object reference. at RECOVER_fyp1_fla::MainTimeline/abc1() Hi, what does this mean? Can anybody figure it out? Thanks. Code: swatter.addEventListener(Event.ENTER_FRAME,abc1); Mouse.hide(); function abc1(e:Event) { swatter.x = mouseX; swatter.y = mouseY; mosq1.y = mosq1.y + 2; mosq2.y = mosq2.y + 3; mosq3.y = mosq3.y + 4; mosq4.y = mosq4.y + 5; mosq5.y = mosq5.y + 6; if (mosq1.y > 640) { mosq1.y = -50; } if (mosq2.y > 640) { mosq2.y = -50; } if (mosq3.y > 640) { mosq3.y = -50; } if (mosq4.y > 640) { mosq4.y = -50; } if (mosq5.y > 640) { mosq5.y = -50; } if(swatter.hitTestObject(mosq1)) { //SoundMixer.stopAll(); //three_start_sound1.play(); mosq1.parent.removeChild(mosq1); } if(swatter.hitTestObject(mosq2)) { //SoundMixer.stopAll(); //three_start_sound1.play(); mosq2.parent.removeChild(mosq2); } if(swatter.hitTestObject(mosq3)) { //SoundMixer.stopAll(); //three_start_sound1.play(); mosq3.parent.removeChild(mosq3); } if(swatter.hitTestObject(mosq4)) { //SoundMixer.stopAll(); //three_start_sound1.play(); mosq4.parent.removeChild(mosq4); } if(swatter.hitTestObject(mosq5)) { //SoundMixer.stopAll(); //three_start_sound1.play(); mosq5.parent.removeChild(mosq5); } }

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • "is not abstact and does not override abstract method."

    - by Chris Bolton
    So I'm pretty new to android development and have been trying to piece together some code bits. Here's what I have so far: package com.teslaprime.prirt; import android.app.Activity; import android.os.Bundle; import android.view.View; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.ListView; import android.widget.AdapterView.OnItemClickListener; import java.util.ArrayList; import java.util.List; public class TaskList extends Activity { List<Task> model = new ArrayList<Task>(); ArrayAdapter<Task> adapter = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Button add = (Button) findViewById(R.id.add); add.setOnClickListener(onAdd); ListView list = (ListView) findViewById(R.id.tasks); adapter = new ArrayAdapter<Task>(this,android.R.layout.simple_list_item_1,model); list.setAdapter(adapter); list.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(View v, int position, long id) { adapter.remove(position); } });} private View.OnClickListener onAdd = new View.OnClickListener() { public void onClick(View v) { Task task = new Task(); EditText name = (EditText) findViewById(R.id.taskEntry); task.name = name.getText().toString(); adapter.add(task); } }; } and here are the errors I'm getting: compile: [javac] /opt/android-sdk/tools/ant/main_rules.xml:384: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds [javac] Compiling 2 source files to /home/chris-kun/code/priRT/bin/classes [javac] /home/chris-kun/code/priRT/src/com/teslaprime/prirt/TaskList.java:30: <anonymous com.teslaprime.prirt.TaskList$1> is not abstract and does not override abstract method onItemClick(android.widget.AdapterView<?>,android.view.View,int,long) in android.widget.AdapterView.OnItemClickListener [javac] list.setOnItemClickListener(new OnItemClickListener() { [javac] ^ [javac] /home/chris-kun/code/priRT/src/com/teslaprime/prirt/TaskList.java:32: remove(com.teslaprime.prirt.Task) in android.widget.ArrayAdapter<com.teslaprime.prirt.Task> cannot be applied to (int) [javac] adapter.remove(position); [javac] ^ [javac] 2 errors BUILD FAILED /opt/android-sdk/tools/ant/main_rules.xml:384: Compile failed; see the compiler error output for details. Total time: 2 seconds any ideas?
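
    For reference, both compiler errors point at signatures: OnItemClickListener.onItemClick takes (AdapterView<?>, View, int, long), and ArrayAdapter.remove() takes the item itself rather than its index. A sketch of the corrected listener (add an import for android.widget.AdapterView):

        list.setOnItemClickListener(new OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
                // remove() wants the Task object, so look it up by position first
                adapter.remove(adapter.getItem(position));
            }
        });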

    Read the article

  • using '.each' method: how do I get the indexes of multiple ordered lists to each begin at [0]?

    - by shecky
    I've got multiple divs, each with an ordered list (various lengths). I'm using jquery to add a class to each list item according to its index (for the purpose of columnizing portions of each list). What I have so far ... <script type="text/javascript"> /* Objective: columnize list items from a single ul or ol in a pre-determined number of columns 1. get the index of each list item 2. assign column class according to li's index */ $(document).ready(function() { $('ol li').each(function(index){ // assign class according to li's index ... index = li number -1: 1-6 = 0-5; 7-12 = 6-11, etc. if ( index <= 5 ) { $(this).addClass('column-1'); } if ( index > 5 && index < 12 ) { $(this).addClass('column-2'); } if ( index > 11 ) { $(this).addClass('column-3'); } // add another class to the first list item in each column $('ol li').filter(function(index) { return index != 0 && index % 6 == 0; }).addClass('reset'); }); // closes li .each func }); // closes doc.ready.func </script> ... succeeds if there's only one list; when there are additional lists, the last column class ('column-3') is added to all remaining list items on the page. In other words, the script is presently indexing continuously through all subsequent lists/list items, rather than being re-set to [0] for each ordered list. Can someone please show me the proper method/syntax to correct/amend this, so that the script addresses/indexes each ordered list anew? many thanks in advance. shecky p.s. the markup is pretty straight-up: <div class="tertiary"> <h1>header</h1> <ol> <li><a href="#" title="a link">a link</a></li> <li><a href="#" title="a link">a link</a></li> <li><a href="#" title="a link">a link</a></li> </ol> </div><!-- END div class="tertiary" -->
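
    A sketch of one way to restart the count for every list, as asked: iterate the ordered lists first and then their items, so index is always relative to the current ol.

        $(document).ready(function () {
            $('ol').each(function () {
                $(this).find('li').each(function (index) {
                    if (index <= 5)      { $(this).addClass('column-1'); }
                    else if (index < 12) { $(this).addClass('column-2'); }
                    else                 { $(this).addClass('column-3'); }
                    // first item of each subsequent column
                    if (index != 0 && index % 6 == 0) { $(this).addClass('reset'); }
                });
            });
        });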

    Read the article
