Search Results

Search found 32512 results on 1301 pages for 'object oriented analysis'.


  • IXRepository and test problems

    - by Ridermansb
    I recently ran into the question of how and where to test repository methods. Consider the following situation: I have an interface IRepository like this:

        public interface IRepository<T> where T : class, IEntity
        {
            IQueryable<T> Query(Expression<Func<T, bool>> expression);
            // ... Omitted
        }

    And a generic implementation of IRepository:

        public class Repository<T> : IRepository<T> where T : class, IEntity
        {
            public IQueryable<T> Query(Expression<Func<T, bool>> expression)
            {
                return All().Where(expression).AsQueryable();
            }
        }

    This is a base implementation that can be used by any repository; it contains the basic plumbing of my ORM. Some repositories have specific filters. Take IEmployeeRepository, which has a specific filter:

        public interface IEmployeeRepository : IRepository<Employee>
        {
            IQueryable<Employee> GetInactiveEmployees();
        }

    And the implementation of IEmployeeRepository:

        // TODO: at this point Repository<Employee> gives me a dependency on the ORM.
        // How do I solve that? How do I test the GetInactiveEmployees method?
        public class EmployeeRepository : Repository<Employee>, IEmployeeRepository
        {
            public IQueryable<Employee> GetInactiveEmployees()
            {
                return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now);
            }
        }

    Questions: Is it right to inherit from Repository<Employee>? The goal is to reuse code, since the whole implementation of IRepository has already been written; if EmployeeRepository implemented only IEmployeeRepository, I would have to literally copy and paste the code of Repository<T>. But in EmployeeRepository : Repository<Employee>, the Repository base lives in our ORM layer, so we have a dependency on the ORM that makes unit testing impossible. How do I create a unit test to ensure that the GetInactiveEmployees filter returns all Employees where Status != Active or StartDate < DateTime.Now? I cannot just create a fake/mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees. The complete code can be found on GitHub.
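    A minimal NUnit-style sketch of one way to test this without the ORM, under assumptions not stated in the post: Repository<T>.All() is (or can be made) virtual, Employee's Status/StartDate properties are settable, and the enum has an Inactive member. The real filter logic in GetInactiveEmployees then runs against an in-memory list:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        public class TestableEmployeeRepository : EmployeeRepository
        {
            private readonly List<Employee> _data;
            public TestableEmployeeRepository(List<Employee> data) { _data = data; }

            // Feed the real Query()/GetInactiveEmployees() logic from memory
            public override IQueryable<Employee> All() { return _data.AsQueryable(); }
        }

        [TestFixture]
        public class EmployeeRepositoryTests
        {
            [Test]
            public void GetInactiveEmployees_ExcludesOnlyActiveFutureEmployees()
            {
                var data = new List<Employee>
                {
                    // active, starts tomorrow -> excluded by the filter
                    new Employee { Status = StatusEmployeeEnum.Active, StartDate = DateTime.Now.AddDays(1) },
                    // inactive -> included
                    new Employee { Status = StatusEmployeeEnum.Inactive, StartDate = DateTime.Now.AddDays(1) },
                    // active but already started -> included
                    new Employee { Status = StatusEmployeeEnum.Active, StartDate = DateTime.Now.AddDays(-1) }
                };

                var repository = new TestableEmployeeRepository(data);

                Assert.AreEqual(2, repository.GetInactiveEmployees().Count());
            }
        }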

  • Is "If a method is re-used without changes, put the method in a base class, else create an interface" a good rule-of-thumb?

    - by exizt
    A colleague of mine came up with a rule-of-thumb for choosing between creating a base class or an interface. He says: Imagine every new method that you are about to implement. For each of them, consider this: will this method be implemented by more than one class in exactly this form, without any change? If the answer is "yes", create a base class. In every other situation, create an interface. For example: Consider the classes cat and dog, which extend the class mammal and have a single method pet(). We then add the class alligator, which doesn't extend anything and has a single method slither(). Now, we want to add an eat() method to all of them. If the implementation of the eat() method will be exactly the same for cat, dog and alligator, we should create a base class (let's say, animal), which implements this method. However, if its implementation in alligator differs in the slightest way, we should create an IEat interface and make mammal and alligator implement it. He insists that this method covers all cases, but it seems like an over-simplification to me. Is it worth following this rule-of-thumb?
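    To make the rule concrete, here is a minimal C# sketch of its two branches (the namespace names are mine, and the method bodies are placeholders):

        // Case 1: eat() is identical for every class -> put it in a base class.
        namespace SharedEat
        {
            public abstract class Animal
            {
                public void Eat() { /* the one shared implementation */ }
            }

            public class Cat : Animal { public void Pet() { } }
            public class Dog : Animal { public void Pet() { } }
            public class Alligator : Animal { public void Slither() { } }
        }

        // Case 2: alligator's eat() differs even slightly -> use an interface.
        namespace DivergentEat
        {
            public interface IEat { void Eat(); }

            public abstract class Mammal : IEat
            {
                public void Eat() { /* mammal implementation */ }
                public void Pet() { }
            }

            public class Cat : Mammal { }
            public class Dog : Mammal { }

            public class Alligator : IEat
            {
                public void Eat() { /* slightly different implementation */ }
                public void Slither() { }
            }
        }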

  • Android app: no error message, but it has stopped unexpectedly [migrated]

    - by user74722
    Does anyone know what my problem is? I do not get any compile error messages; however, when I run the app it crashes and stops unexpectedly. Here is my code. Thank you in advance.

        ListView l;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_final_project);
            String arr[] = {"Red", "Green", "Blue", "Yellow", "Cyan"};
            l = (ListView) findViewById(R.id.listView1);
            ArrayAdapter<String> adapter = new ArrayAdapter<String>(getApplicationContext(),
                    android.R.layout.simple_list_item_1, arr);
            l.setAdapter(adapter);
            Button buttonOne = (Button) findViewById(R.id.button1);
            buttonOne.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View v) {
                    setContentView(R.layout.layout_save);
                    // setContentView(R.layout.activity_final_project);
                    // startActivity(new Intent("com.example.finalproject.layout_save"));
                }
            });
        }

  • Which order to define getters and setters in? [closed]

    - by N.N.
    Is there a best practice for the order in which to define getters and setters? There seem to be two practices: getter/setter pairs; or first getters, then setters (or the other way around). To illustrate the difference, here is a Java example of getter/setter pairs:

        public class Foo {
            private int var1, var2, var3;

            public int getVar1() { return var1; }
            public void setVar1(int var1) { this.var1 = var1; }

            public int getVar2() { return var2; }
            public void setVar2(int var2) { this.var2 = var2; }

            public int getVar3() { return var3; }
            public void setVar3(int var3) { this.var3 = var3; }
        }

    And here is a Java example of first getters, then setters:

        public class Foo {
            private int var1, var2, var3;

            public int getVar1() { return var1; }
            public int getVar2() { return var2; }
            public int getVar3() { return var3; }

            public void setVar1(int var1) { this.var1 = var1; }
            public void setVar2(int var2) { this.var2 = var2; }
            public void setVar3(int var3) { this.var3 = var3; }
        }

    I think the latter ordering is clearer both in code and in class diagrams, but I do not know if that is enough to rule out the other ordering.

  • .NET 4.0 Dynamic object used statically?

    - by Kevin Won
    I've gotten quite sick of XML configuration files in .NET and want to replace them with a format that is more sane. Therefore, I'm writing a config file parser for C# applications that will take a custom config file format, parse it, and create a Python source string that I can then execute in C# and use as a static object (yes, that's right--I want a static object in the end, not the static type dynamic). Here's an example of what my config file looks like:

        // my custom config file format
        GlobalName: ExampleApp
        Properties {
            ExternalServiceTimeout: "120"
        }
        Python {
            // this allows for straight python code to be added to handle custom config
            def MyCustomPython:
                return "cool"
        }

    Using ANTLR I've created a lexer/parser that will convert this format to a Python script. So assume I have that all right, and can take the .config above and run my lexer/parser on it to get a Python script out the back (this has the added benefit of giving me a validation tool for my config). By running the resultant script in C#:

        // simplified example of getting the dynamic python object in C#
        // (not how I really do it)
        ScriptRuntime py = Python.CreateRuntime();
        dynamic conf = py.UseFile("conftest.py");
        dynamic t = conf.GetConfTest("test");

    I can get a dynamic object that has my configuration settings. I can now read my config file settings in C# by invoking a dynamic method on that object:

        // C# calling a method on the dynamic python object
        var timeout = t.GetProperty("ExternalServiceTimeout");
        // the config also allows for straight Python scripting (via the Python block)
        var special = t.MyCustomPython();

    Of course, I have no type safety here and no IntelliSense support. I have a dynamic representation of my config file, but I want a static one. I know what my Python object's type is--it is actually newing up an instance of a C# class. But since that happens in Python, its type is not the C# type, but dynamic instead. What I want to do is cast the object back to the C# type that I know the object is:

        // doesn't work--can't cast a dynamic to a static type (nulls out)
        IConfigSettings staticTypeConfig = t as IConfigSettings;

    Is there any way to cast the object to the static type? I'm rather doubtful that there is... so doubtful that I took another approach, of which I'm not entirely sure either. I'm wondering if someone has a better way. So here's my current tactic: since I know the type of the Python object, I am creating a C# wrapper class:

        public class ConfigSettings : IConfigSettings

    that takes in a dynamic object in the ctor:

        public ConfigSettings(dynamic settings)
        {
            this.DynamicProxy = settings;
        }

        public dynamic DynamicProxy { get; private set; }

    Now I have a reference to the Python dynamic object, of which I know the type. So I can then just put wrappers around the Python methods that I know are there:

        // wrapper access to the underlying dynamic object
        // this makes my dynamic object appear 'static'
        public string GetSetting(string key)
        {
            return this.DynamicProxy.GetProperty(key).ToString();
        }

    Now the dynamic object is accessed through this static proxy and thus can obviously be passed around in the static C# world via an interface, etc.:

        // dependency inject the dynamic object around
        IBusinessLogic logic = new BusinessLogic(IConfigSettings config);

    This solution has the benefits of all the static typing stuff we know and love, while at the same time giving me the option of 'bailing out' to dynamic too:

        // the DynamicProxy property gives direct access to the dynamic object
        var result = config.DynamicProxy.MyCustomPython();

    But, man, this seems a rather convoluted way of getting to an object that is a static type in the first place! Since the whole dynamic/static interaction world is new to me, I'm really questioning whether my solution is optimal, or whether I'm missing something (i.e. some way of casting that dynamic object to a known static type) about how to bridge the chasm between these two universes.
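    For what it's worth, one avenue worth checking before settling on the proxy: IronPython classes can derive from .NET types, so if the config object's class is defined in Python, declaring that class as implementing IConfigSettings should make the ordinary cast work. A sketch of the C# side under that assumption:

        // Assumes conftest.py declares something like:
        //     class ConfigSettings(IConfigSettings): ...
        // (IronPython lets a Python class list a .NET interface as a base type.)
        ScriptRuntime py = Python.CreateRuntime();
        dynamic conf = py.UseFile("conftest.py");
        dynamic t = conf.GetConfTest("test");

        // With the interface declared on the Python side, this no longer nulls out:
        IConfigSettings staticTypeConfig = t as IConfigSettings;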

  • How to Correct & Improve the Design of this Code?

    - by DaveDev
    Hi guys, I've been working on a little experiment to see if I could create a helper method to serialize any of my types to any type of HTML tag I specify. I'm getting a NullReferenceException when _writer = _viewContext.Writer; is called in protected virtual void Dispose(bool disposing) {/*...*/}. I think I'm at a point where it almost works (I've gotten other implementations to work), and I was wondering if somebody could point out what I'm doing wrong? Also, I'd be interested in hearing suggestions on how I could improve the design. So basically, I have this code that will generate a select box with a number of options:

        // the idea is I can use one method to create any complete tag of any type
        // and put whatever I want in the content area
        <% using (Html.GenerateTag<SelectTag>(Model, new { href = Url.Action("ActionName") })) { %>
            <% foreach (var fund in Model.Funds) { %>
                <% using (Html.GenerateTag<OptionTag>(fund)) { %>
                    <%= fund.Name %>
                <% } %>
            <% } %>
        <% } %>

    This Html.GenerateTag helper is defined as:

        public static MMTag GenerateTag<T>(this HtmlHelper htmlHelper, object elementData, object attributes) where T : MMTag
        {
            return (T)Activator.CreateInstance(typeof(T), htmlHelper.ViewContext, elementData, attributes);
        }

    Depending on the type of T, it'll create one of the types defined below:

        public class HtmlTypeBase : MMTag
        {
            public HtmlTypeBase() { }

            public HtmlTypeBase(ViewContext viewContext, params object[] elementData)
            {
                base._viewContext = viewContext;
                base.MergeDataToTag(viewContext, elementData);
            }
        }

        public class SelectTag : HtmlTypeBase
        {
            public SelectTag(ViewContext viewContext, params object[] elementData)
            {
                base._tag = new TagBuilder("select");
                //base.MergeDataToTag(viewContext, elementData);
            }
        }

        public class OptionTag : HtmlTypeBase
        {
            public OptionTag(ViewContext viewContext, params object[] elementData)
            {
                base._tag = new TagBuilder("option");
                //base.MergeDataToTag(viewContext, _elementData);
            }
        }

        public class AnchorTag : HtmlTypeBase
        {
            public AnchorTag(ViewContext viewContext, params object[] elementData)
            {
                base._tag = new TagBuilder("a");
                //base.MergeDataToTag(viewContext, elementData);
            }
        }

    All of these types (anchor, select, option) inherit from HtmlTypeBase, which is intended to perform base.MergeDataToTag(viewContext, elementData); -- but this doesn't happen. It works if I uncomment the MergeDataToTag methods in the derived classes, but I don't want to repeat that same code for every derived class I create. This is the definition of MMTag:

        public class MMTag : IDisposable
        {
            internal bool _disposed;
            internal ViewContext _viewContext;
            internal TextWriter _writer;
            internal TagBuilder _tag;
            internal object[] _elementData;

            public MMTag() { }
            public MMTag(ViewContext viewContext, params object[] elementData) { }

            public void Dispose()
            {
                Dispose(true /* disposing */);
                GC.SuppressFinalize(this);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (!_disposed)
                {
                    _disposed = true;
                    _writer = _viewContext.Writer;
                    _writer.Write(_tag.ToString(TagRenderMode.EndTag));
                }
            }

            protected void MergeDataToTag(ViewContext viewContext, object[] elementData)
            {
                Type elementDataType = elementData[0].GetType();
                foreach (PropertyInfo prop in elementDataType.GetProperties())
                {
                    if (prop.PropertyType.IsPrimitive || prop.PropertyType == typeof(Decimal)
                        || prop.PropertyType == typeof(String))
                    {
                        object propValue = prop.GetValue(elementData[0], null);
                        string stringValue = propValue != null ? propValue.ToString() : String.Empty;
                        _tag.Attributes.Add(prop.Name, stringValue);
                    }
                }

                var dic = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase);
                var attributes = elementData[1];
                if (attributes != null)
                {
                    foreach (PropertyDescriptor descriptor in TypeDescriptor.GetProperties(attributes))
                    {
                        object value = descriptor.GetValue(attributes);
                        dic.Add(descriptor.Name, value);
                    }
                }
                _tag.MergeAttributes<string, object>(dic);

                _viewContext = viewContext;
                _viewContext.Writer.Write(_tag.ToString(TagRenderMode.StartTag));
            }
        }

    Thanks, Dave
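    As far as I can tell from the posted code, the NullReferenceException happens because SelectTag's constructor never chains to the HtmlTypeBase(viewContext, ...) constructor -- it implicitly calls the parameterless one -- so _viewContext is never assigned before Dispose runs. A sketch of one possible fix, assuming the rest of the classes stay as posted: let the base constructor own both the TagBuilder and the merge, so derived tags only name their element.

        public class HtmlTypeBase : MMTag
        {
            protected HtmlTypeBase(ViewContext viewContext, string tagName, params object[] elementData)
            {
                _tag = new TagBuilder(tagName);   // must exist before MergeDataToTag writes the start tag
                _viewContext = viewContext;       // Dispose needs this; it was never assigned before
                MergeDataToTag(viewContext, elementData);
            }
        }

        public class SelectTag : HtmlTypeBase
        {
            public SelectTag(ViewContext viewContext, params object[] elementData)
                : base(viewContext, "select", elementData) { }
        }

        public class OptionTag : HtmlTypeBase
        {
            public OptionTag(ViewContext viewContext, params object[] elementData)
                : base(viewContext, "option", elementData) { }
        }

    This also removes the ordering trap in the original design: the derived constructor body (which set _tag) ran after the base constructor had already tried to merge.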

  • Where is meta.local_fields set in django.db.models.base.py?

    - by BryanWheelock
    I'm getting the error:

        Exception Value: (1110, "Column 'about' specified twice")

    Reviewing the Django error page, I noticed that the customizations to the User model seem to be appended to the field list twice. This seems to be happening in django/db/models/base.py, in save_base():

        values = [(f, f.get_db_prep_save(raw and getattr(self, f.attname) or f.pre_save(self, True))) for f in meta.local_fields]

    This is what the Django error page shows values to be:

        values = [(<django.db.models.fields.CharField object at 0xa78996c>, u'kallie'),
        (<django.db.models.fields.CharField object at 0xa7899cc>, ''),
        (<django.db.models.fields.CharField object at 0xa789a2c>, ''),
        (<django.db.models.fields.EmailField object at 0xa789a8c>, u'[email protected]'),
        (<django.db.models.fields.CharField object at 0xa789b2c>, 'sha1$d4a80$0e5xxxxxxxxxxxxxxxxxxxxddadfb07'),
        (<django.db.models.fields.BooleanField object at 0xa789bcc>, False),
        (<django.db.models.fields.BooleanField object at 0xa789c6c>, True),
        (<django.db.models.fields.BooleanField object at 0xa789d2c>, False),
        (<django.db.models.fields.DateTimeField object at 0xa789dcc>, u'2010-02-03 14:54:35'),
        (<django.db.models.fields.DateTimeField object at 0xa789e2c>, u'2010-02-03 14:54:35'),
        # this is where the values from the User model customizations show up
        (<django.db.models.fields.BooleanField object at 0xa8c69ac>, False),
        (<django.db.models.fields.CharField object at 0xa8c688c>, None),
        (<django.db.models.fields.PositiveIntegerField object at 0xa8c69cc>, 1),
        (<django.db.models.fields.CharField object at 0xa8c69ec>, 'b5ab1603b2308xxxxxxxxxxx75bca1'),
        (<django.db.models.fields.SmallIntegerField object at 0xa8c6dac>, 0),
        (<django.db.models.fields.SmallIntegerField object at 0xa8c6e4c>, 0),
        (<django.db.models.fields.SmallIntegerField object at 0xa8c6e8c>, 0),
        (<django.db.models.fields.SmallIntegerField object at 0xa8c6ecc>, 10),
        (<django.db.models.fields.DateTimeField object at 0xa8c6eec>, u'2010-02-03 14:54:35'),
        (<django.db.models.fields.CharField object at 0xa8c6f2c>, ''),
        (<django.db.models.fields.URLField object at 0xa8c6f6c>, ''),
        (<django.db.models.fields.CharField object at 0xa8c6fac>, ''),
        (<django.db.models.fields.DateField object at 0xa8c6fec>, None),
        (<django.db.models.fields.TextField object at 0xa8cb04c>, ''),
        # at this point the User model customizations repeat themselves
        (<django.db.models.fields.BooleanField object at 0xa663b0c>, False),
        (<django.db.models.fields.CharField object at 0xaa1e94c>, None),
        (<django.db.models.fields.PositiveIntegerField object at 0xaa1e34c>, 1),
        (<django.db.models.fields.CharField object at 0xaa1e40c>, 'b5ab1603b2308050ebd62f49ca75bca1'),
        (<django.db.models.fields.SmallIntegerField object at 0xa8c6d8c>, 0),
        (<django.db.models.fields.SmallIntegerField object at 0xaa2378c>, 0),
        (<django.db.models.fields.SmallIntegerField object at 0xaa237ac>, 0),
        (<django.db.models.fields.SmallIntegerField object at 0xaa237ec>, 10),
        (<django.db.models.fields.DateTimeField object at 0xaa2380c>, u'2010-02-03 14:54:35'),
        (<django.db.models.fields.CharField object at 0xaa2384c>, ''),
        (<django.db.models.fields.URLField object at 0xaa2388c>, ''),
        (<django.db.models.fields.CharField object at 0xaa238cc>, ''),
        (<django.db.models.fields.DateField object at 0xaa2390c>, None),
        (<django.db.models.fields.TextField object at 0xaa2394c>, '')]

    Since this app is in production, I can't figure out how to use pdb.set_trace() to see what's going on inside save_base. The customizations to User are:

        User.add_to_class('email_isvalid', models.BooleanField(default=False))
        User.add_to_class('email_key', models.CharField(max_length=16, null=True))
        User.add_to_class('reputation', models.PositiveIntegerField(default=1))
        User.add_to_class('gravatar', models.CharField(max_length=32))
        User.add_to_class('email_feeds', generic.GenericRelation(EmailFeed))
        User.add_to_class('favorite_questions', models.ManyToManyField(Question, through=FavoriteQuestion, related_name='favorited_by'))
        User.add_to_class('badges', models.ManyToManyField(Badge, through=Award, related_name='awarded_to'))
        User.add_to_class('gold', models.SmallIntegerField(default=0))
        User.add_to_class('silver', models.SmallIntegerField(default=0))
        User.add_to_class('bronze', models.SmallIntegerField(default=0))
        User.add_to_class('questions_per_page', models.SmallIntegerField(choices=QUESTIONS_PER_PAGE_CHOICES, default=10))
        User.add_to_class('last_seen', models.DateTimeField(default=datetime.datetime.now))
        User.add_to_class('real_name', models.CharField(max_length=100, blank=True))
        User.add_to_class('website', models.URLField(max_length=200, blank=True))
        User.add_to_class('location', models.CharField(max_length=100, blank=True))
        User.add_to_class('date_of_birth', models.DateField(null=True, blank=True))
        User.add_to_class('about', models.TextField(blank=True))

    Django 1.1.1, Python 2.5

  • Object oriented n-tier design. Am I abstracting too much? Or not enough?

    - by max
    Hi guys, I'm building my first enterprise grade solution (at least I'm attempting to make it enterprise grade). I'm trying to follow best practice design patterns but am starting to worry that I might be going too far with abstraction. I'm trying to build my asp.net webforms (in C#) app as an n-tier application. I've created a Data Access Layer using an XSD strongly-typed dataset that interfaces with a SQL server backend. I access the DAL through some Business Layer Objects that I've created on a 1:1 basis to the datatables in the dataset (eg, a UsersBLL class for the Users datatable in the dataset). I'm doing checks inside the BLL to make sure that data passed to DAL is following the business rules of the application. That's all well and good. Where I'm getting stuck though is the point at which I connect the BLL to the presentation layer. For example, my UsersBLL class deals mostly with whole datatables, as it's interfacing with the DAL. Should I now create a separate "User" (Singular) class that maps out the properties of a single user, rather than multiple users? This way I don't have to do any searching through datatables in the presentation layer, as I could use the properties created in the User class. Or would it be better to somehow try to handle this inside the UsersBLL? Sorry if this sounds a little complicated... Below is the code from the UsersBLL: using System; using System.Data; using PedChallenge.DAL.PedDataSetTableAdapters; [System.ComponentModel.DataObject] public class UsersBLL { private UsersTableAdapter _UsersAdapter = null; protected UsersTableAdapter Adapter { get { if (_UsersAdapter == null) _UsersAdapter = new UsersTableAdapter(); return _UsersAdapter; } } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, true)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsers() { return Adapter.GetUsers(); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUserByUserID(int userID) { return Adapter.GetUserByUserID(userID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByTeamID(int teamID) { return Adapter.GetUsersByTeamID(teamID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByEmail(string Email) { return Adapter.GetUserByEmail(Email); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Insert, true)] public bool AddUser(int? 
teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { // Create a new UsersRow instance PedChallenge.DAL.PedDataSet.UsersDataTable Users = new PedChallenge.DAL.PedDataSet.UsersDataTable(); PedChallenge.DAL.PedDataSet.UsersRow user = Users.NewUsersRow(); if (UserExists(Users, Email) == true) return false; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Add the new user Users.AddUsersRow(user); int rowsAffected = Adapter.Update(Users); // Return true if precisely one row was inserted, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Update, true)] public bool UpdateUser(int userID, int? teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { PedChallenge.DAL.PedDataSet.UsersDataTable Users = Adapter.GetUserByUserID(userID); if (Users.Count == 0) // no matching record found, return false return false; PedChallenge.DAL.PedDataSet.UsersRow user = Users[0]; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Update the product record int rowsAffected = Adapter.Update(user); // Return true if precisely one row was updated, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Delete, true)] public bool DeleteUser(int userID) { int rowsAffected = Adapter.Delete(userID); // Return true if precisely one row was deleted, // otherwise false return rowsAffected == 1; } private bool UserExists(PedChallenge.DAL.PedDataSet.UsersDataTable users, string email) { // Check if user email already exists foreach (PedChallenge.DAL.PedDataSet.UsersRow userRow in users) { if (userRow.Email == email) return true; } return false; } } Some guidance in the right direction would be greatly appreciated!! Thanks all! Max
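    If it helps frame the question: a minimal sketch of the kind of single-entity class being considered, using the column names visible in UsersBLL (IsTeamIDNull() is the usual typed-dataset accessor, assumed to exist here). It maps one UsersRow so the presentation layer never touches a DataTable:

        public class User
        {
            public int UserID { get; private set; }
            public int? TeamID { get; private set; }
            public string FirstName { get; private set; }
            public string LastName { get; private set; }
            public string Email { get; private set; }
            public string Role { get; private set; }
            public int LocationID { get; private set; }

            // Map one strongly-typed dataset row to a plain object
            public static User FromRow(PedChallenge.DAL.PedDataSet.UsersRow row)
            {
                return new User
                {
                    UserID = row.UserID,
                    TeamID = row.IsTeamIDNull() ? (int?)null : row.TeamID,
                    FirstName = row.FirstName,
                    LastName = row.LastName,
                    Email = row.Email,
                    Role = row.Role,
                    LocationID = row.LocationID
                };
            }
        }

    UsersBLL could then expose object-returning methods alongside (or instead of) the table-returning ones, e.g.:

        public User GetUser(int userID)
        {
            var rows = Adapter.GetUserByUserID(userID);
            return rows.Count == 0 ? null : User.FromRow(rows[0]);
        }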

  • How to design Models the correct way: Object-oriented or "Package"-oriented?

    - by ajsie
    I know that in OOP you want every object (from a class) to be a "thing", e.g. a user, a validator, etc. I know the basics of MVC and how the different parts interact with each other. However, I wonder whether the models in MVC should be designed according to traditional OOP design -- that is, should every model be a database/table/row (solution 2)? Or is the intention more to collect the methods that affect the same table or a bunch of related tables (solution 1)? As an example, take an Address book module in CodeIgniter, where I want to be able to "CRUD" a Contact and add/remove it to/from a CRUD-able Contact Group.

        // Models solution 1: bunching all related methods together (not a real object, rather a "package")
        class Contacts extends Model {
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

        // Models solution 2: the OOP way (one class per file)
        class Contact extends Model {
            private $name = '';
            private $id = '';

            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
        }

        class ContactGroup extends Model {
            private $name = '';
            private $id = '';

            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    I don't know how to think when I want to create the models, and the above examples are my real tasks for creating an Address book. Should I just bunch all the functions together in one class? Then the class contains different logic (contact and group), so it cannot hold properties that are specific to either one of them. Solution 2 works according to OOP, but I don't know why I should make such a division. What would the benefit be of having a Contact object, for example? It's surely not a User object, so why should a Contact "live" with its own state (properties and methods)? You experienced guys with OOP/MVC, please shed some light on how one should think in this very concrete task.

  • Android Signal analysis + some filters.

    - by Profete162
    Hello. As the World Cup is the main sport event and the vuvuzelas are the most annoying sound in the world, I had an idea to remove them for good after reading this news article (http://www.popsci.com/diy/article/2010-06/simple-software-can-filter-out-vuvuzela-whine), which says the sound has a fundamental frequency of 233 Hz plus harmonics at 466, 932, and 1864 Hz. I have already built a lot of Android applications myself, but I have never touched the signal analysis and filtering part, so here are a few questions. I am not asking for precise answers, just maybe links and tutorials to find something to work from. I guess that a new Android phone has the CPU power to do real-time filtering. 1) How can I intercept the sound coming from the jack microphone/line-in plug? (I plan to link my TV to my phone with a jack-to-jack cable.) My question is purely about software and coding; I have all the wires and adapters to plug a jack into my Android phone's line-in. 2) Are there Fourier analysis libraries available? Can I take Java libraries from the web and import them into my Android project? I apologize that my question seems imprecise, but I think this could be something great. Thank you for your answers.

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized in bins: item_type, item_class, item_dimension_class; other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/numpy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, one user could search for all items that are blue and were shipped between week x and week y, while another could sort the weight distribution of all items by hour across the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but that seems inefficient. I'm looking forward to hearing your ideas. Thanks

  • Why can't we create an object if the constructor is in the private section?

    - by Abhi
    Dear all, I want to know why we can't create an object if the constructor is in the private section. I know that if I make a method static I can call that method using <classname>::<methodname(...)>; but why we can't create an object is my doubt. I also know that if my method is not static, I can still call it in the following way:

        class A {
            A();
        public:
            void fun1();
            void fun2();
            void fun3();
        };

        int main() {
            A *obj = (A*)malloc(sizeof(A));
            // Here we can't use new A() because the constructor is private,
            // but we can use malloc with it. That will not call the constructor,
            // and hence it is harmful, because the object may not be in a usable state.
            obj->fun1();
            obj->fun2();
            obj->fun3();
        }

    So my only doubt is: why can't we create an object when the constructor is in the private section? Thanks in advance
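    For illustration, a minimal sketch of the usual reason for a private constructor -- forcing creation through a controlled path. (Shown in C# rather than the question's C++; the idea carries over, with a static member function playing the same role.)

        public class A
        {
            private A() { }   // outsiders cannot call this

            // A static member of the class can still reach the private constructor
            public static A Create()
            {
                return new A();
            }

            public void Fun1() { }
        }

        // Usage:
        // A obj = new A();      // compile error: the constructor is inaccessible
        // A obj = A.Create();   // fine, and the object is properly constructed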

  • How do I add an object to a binary tree based on the value of a member variable?

    - by Max
    How can I get a specific value from an object? I'm trying to get a value from an instance, e.g.:

        ListOfPpl newListOfPpl = new ListOfPpl(id, name, age);
        Object item = newListOfPpl;

    How can I get the value of name from the Object item? Even if it is easy or does not interest you, can anyone help me? Edit: I was trying to build a binary tree containing nodes of ListOfPpl, and I need to sort it lexicographically. Here's my code for insertion of a node. Any clue?

        public void insert(Object item) {
            Node current = root;
            Node follow = null;
            if (!isEmpty()) {
                root = new Node(item, null, null);
                return;
            }
            boolean left = false, right = false;
            while (current != null) {
                follow = current;
                left = false;
                right = false;
                // I need to compare and sort it
                if (item.compareTo(current.getFighter()) < 0) {
                    current = current.getLeft();
                    left = true;
                } else {
                    current = current.getRight();
                    right = true;
                }
            }
            if (left)
                follow.setLeft(new Node(item, null, null));
            else
                follow.setRight(new Node(item, null, null));
        }

  • [javascript] Can I overload an object with a function?

    - by user257493
    Let's say I have an object of functions/values. I'm interested in overloading based on calling behavior. For example, the block of code below demonstrates what I wish to do:

        var main_thing = {
            initalized: false,
            something: "Hallo, welt!",
            something_else: [123, 456, 789],
            load: {
                sub1: function() {
                    // Some stuff
                },
                sub2: function() {
                    // Some stuff
                },
                all: function() {
                    this.load.sub1();
                    this.load.sub2();
                }
            },
            init: function() {
                this.initalized = true;
                this.something = "Hello, world!";
                this.something_else = [0, 0, 0];
                this.load(); // I want this to call this.load.all() instead.
            }
        };

    The issue is that main_thing.load is assigned to an object, so calling main_thing.load.all() calls the function inside the object (via the () operator). What can I do to set up my code so that I can use main_thing.load as access to the object, and main_thing.load() to execute some code? Or at least get similar behavior. Basically, this would be similar to a default constructor in other languages, where you don't need to call main_thing.constructor(). If this isn't possible, please explain with a bit of detail.

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days -- in particular the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward. In his presentation Dan introduces the concept of observability. I haven't had the opportunity to discuss his specific definition of the term with him. However, in practical retail terms, I would say it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers, rather than what a particular individual "Likes". The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost in real time, rather than through infrequent surveys that provide a "rear view" picture of your consumer behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and commit a re-buy. Or you could monitor when the preference for a specific mobile device has declined and markdowns should therefore be considered; or track how demand for a specific ready-made food typically flows across regions and manage the inventory accordingly. Search, blogging, website and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail), but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way, rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process that buyers, merchandisers and supply chain managers follow. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance. I would love to hear your opinions on these trends and where you think retail is heading to exploit these topics -- please email me: [email protected]

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services introduced in SQL Server 2012. During these roadshows we always try to arrange some speeches at local community events in the evening -- Copenhagen is already confirmed, and we have a logistics issue in Amsterdam that we're trying to solve. Here is the timetable:

    Netherlands
        - SSAS Workshop in Amsterdam, NL -- April 16-17, 2012: a 2-day seminar; Alberto and I will be the trainers for this event; register here.
        - We're trying to arrange a community event, but we still don't have a confirmation -- stay tuned.

    Denmark
        - SSAS Workshop in Copenhagen, DK -- April 26-27, 2012: a 2-day seminar; Alberto and I will be the trainers for this event; register here.
        - Community event on April 26, 2012, in Hellerup at the Microsoft venue. All details are available here: http://msbip.dk/events/26/msbip-mode-nr-5/. People from Sweden are welcome! Just register with this private group on LinkedIn to announce your presence, so we'll know how many people will attend.

    At the community events we'll deliver two speeches -- here are the descriptions:

    Inside xVelocity (VertiPaq): PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize the memory used, you have to understand some basic principles: how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind -- xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

    Choosing between Tabular and Multidimensional: You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time both decisions might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each according to common scenarios, considering both the short-term and long-term consequences of your choice.

    I hope to meet many people on these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight -- stay tuned...

  • RoboCopy Log File Analysis

    - by BobJim
    Is it possible to analyse the log text file output by RoboCopy and extract the lines flagged as "New Dir" and "Extra Dir"? I would like each extracted line to contain all the details the log reports for that "New Dir" or "Extra Dir" entry. The reason for this task is to understand how two folder structures have changed over time: one version has been kept internally at the parent company, and the second has been used by a consultancy. For your information, I am using Windows 7.
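    A minimal C# sketch of the extraction, assuming the default RoboCopy log format, where the class column literally reads "New Dir" / "Extra Dir":

        using System;
        using System.IO;
        using System.Linq;

        class RoboCopyDirFilter
        {
            static void Main(string[] args)
            {
                // args[0] is the path to the RoboCopy log file
                var interesting = File.ReadLines(args[0])
                    .Where(line => line.Contains("New Dir") || line.Contains("Extra Dir"));

                foreach (var line in interesting)
                    Console.WriteLine(line);   // the full log line, including count and path
            }
        }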

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, outputting to a .csv file data like CPU % time, memory bytes free, and hard disk I/O metrics like s/write and writes/s. The ones monitoring the SQL servers are also collecting SQL stats, and the web servers are collecting .NET-relevant counters. I am aware of PAL, and actually used it as a template for which data to capture based on server type. I just don't think the output it generates is detailed or flexible enough -- though it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data and assemble a graph out of it and look for spikes and trends, but I'm convinced there is some technique that makes this more manageable than staring at a monstrous spreadsheet of numbers and trying to make graphs of it. Plus, it's pretty time-consuming, and not something I can do as a "take a glance at the servers" sort of routine. I'm also graphing CPU, disk use, network b/sec, etc. in Cacti, which is nice for seeing big trends. The problem is that Cacti stores 5-minute averages, so a server could have an intermittent problem that simply washes out in the average. What do you do with perfmon data that I could learn from?
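    If Excel keeps fighting you, scripting the summary is an option. A minimal C# sketch that reduces a perfmon .csv to min/max/average per counter (assumptions: the default perfmon CSV layout, i.e. the first column is the timestamp, every field is double-quoted, and counter names contain no embedded commas -- the split below is naive):

        using System;
        using System.IO;
        using System.Linq;

        class PerfmonSummary
        {
            static void Main(string[] args)
            {
                // args[0] is the path to the perfmon .csv
                var lines = File.ReadAllLines(args[0]);
                var headers = lines[0].Split(',').Select(h => h.Trim('"')).ToArray();

                for (int col = 1; col < headers.Length; col++)   // column 0 is the timestamp
                {
                    // Keep only rows where this column parses as a number
                    var values = lines.Skip(1)
                        .Select(l => l.Split(',')[col].Trim('"'))
                        .Where(v => double.TryParse(v, out _))
                        .Select(double.Parse)
                        .ToList();

                    if (values.Count > 0)
                        Console.WriteLine("{0}: min={1:F2} max={2:F2} avg={3:F2}",
                            headers[col], values.Min(), values.Max(), values.Average());
                }
            }
        }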

  • Linux installation analysis

    - by blunders
    "Ending company IT Admin relationship" has a good checklist for taking over an existing IT system, but I'm wondering as it relates to Linux: What is the most effective way to assess the scope of existing custom configurations, installs, scripts, etc done? Is there any software that will check if the kernel, system files, etc mirror the default files for the version installed? At this point I don't know what distro of Linux the server (though using Netcraft I do know the server appears to be Linux) -- so it's possible without knowing that information that this would be a hard question to answer.

  • MySql Data Loss - post mortem analysis - RackSpace Cloud Server

    - by marfarma
    After a recent 'emergency migration' of a RackSpace cloud server, the MySQL databases on our server snapshot image proved to be days older than the backup date -- and yet files that were uploaded through the affected webapp had been written to the file system. The related metadata that was written to the database was lost, but the files themselves were backed up. When I manually examined the MySQL data files before the MySQL server started (the server was configured to start MySQL on boot), I could see that the update times for ib_logfile1, ib_logfile0 and ibdata1 were days old. As with this poster, mysql data loss after server crash, it's as if some caching controller had told the OS / MySQL server that it had committed data that was still in cache, and it was lost instead of flushed. I can't quite wrap my head around how the uploaded files got written but the database data did not; I would have thought that any cache would be flushed system-wide, rather than process by process. Any suggestions as to how this might have happened?

  • FAT filesystem analysis tool

    - by Andy
    I have a dump of a FAT file system. Is there a Windows tool I can use to analyse it, including:

        - Provide basic information (sector size etc.)
        - Validate the file system, with basic corruption checking
        - Allow the files and directory structure to be viewed and possibly edited (i.e. mounting it as a Windows partition)

    Thanks, Andy

  • Windows File System Analysis

    - by bouvierr
    I am looking for a FREE tool to perform analyses on the NTFS file system of my Windows 7 PC. I want to easily see how the data is distributed throughout the entire file system. The following applications seem very good, but they are not free and probably overkill for my requirements: FolderSizes 5, and MailMeter Windows File System Reporting Tool. I am aware that some applications (like Folder Size 2.5) can add a column in Windows Explorer to show the size of each folder, but I am looking for something more like a reporting tool. Thank you for your suggestions.

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 with Debian Squeeze) froze up over the weekend and I was forced to reset it -- it was unresponsive to KVM/physical keyboard access, no eth devices were responding, etc. Not even the backup process ran (figures -- the one time I don't check for confirmation). After the reset, it turns out that every trace of disk I/O activity that should have happened over a period of ~24h is completely gone. The log files have a big gap in the dates and times, as if the writes were never committed to disk, and no processes seem to have run. Luckily it was a weekend, nothing of value would have been lost, and I don't suspect a hack. What can I do in a post mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk-checking tools right now -- but there must be more going on!

        Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro)
        Kernel: Linux dev 2.6.32-5-686-bigmem
        Disk/Inodes: 13%/3%

  • Forensic Analysis of the OOM-Killer

    - by Oddthinking
    Ubuntu's Out-Of-Memory Killer wreaked havoc on my server, quietly assassinating my applications, sendmail, apache and others. I've managed to learn what the OOM Killer is, and about its "badness" rules. While my machine is small, my applications are even smaller, and typically only half of my physical memory is in use, let alone swap-space, so I was surprised. I am trying to work out the culprit, but I don't know how to read the OOM-Killer logs. Can anyone please point me to a tutorial on how to read the data in the logs (what are ve, free and gen?), or help me parse these logs?

        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 1, exc 2326 0 goal 2326 0...
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 1, exc 2326 0 red 61795 745
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 2, exc 122 0 goal 383 0...
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 2, exc 383 0 red 61795 745
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 2
        Apr 20 20:03:27 EL135 kernel: OOM killed process watchdog (pid=14490, ve=13516) exited, free=43104 gen=24501.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=4457, ve=13516) exited, free=43104 gen=24502.
        Apr 20 20:03:27 EL135 kernel: OOM killed process ntpd (pid=10816, ve=13516) exited, free=43104 gen=24503.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=27401, ve=13516) exited, free=43104 gen=24504.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29009, ve=13516) exited, free=43104 gen=24505.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=10557, ve=13516) exited, free=49552 gen=24506.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=24983, ve=13516) exited, free=53117 gen=24507.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=29129, ve=13516) exited, free=68493 gen=24508.
        Apr 20 20:03:27 EL135 kernel: OOM killed process sendmail-mta (pid=941, ve=13516) exited, free=68803 gen=24509.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=12418, ve=13516) exited, free=69330 gen=24510.
        Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=22953, ve=13516) exited, free=72275 gen=24511.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=6624, ve=13516) exited, free=76398 gen=24512.
        Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=23317, ve=13516) exited, free=94285 gen=24513.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29030, ve=13516) exited, free=95339 gen=24514.
        Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=20583, ve=13516) exited, free=101663 gen=24515.
        Apr 20 20:03:28 EL135 kernel: OOM killed process logger (pid=12894, ve=13516) exited, free=101694 gen=24516.
        Apr 20 20:03:28 EL135 kernel: OOM killed process bash (pid=21119, ve=13516) exited, free=101849 gen=24517.
        Apr 20 20:03:28 EL135 kernel: OOM killed process atd (pid=991, ve=13516) exited, free=101880 gen=24518.
        Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=14649, ve=13516) exited, free=102748 gen=24519.
        Apr 20 20:03:28 EL135 kernel: OOM killed process grep (pid=21375, ve=13516) exited, free=132167 gen=24520.
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 4, exc 4215 0 goal 4826 0...
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 1
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 4, exc 4826 0 red 189481 331
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 2
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 5, exc 3564 0 goal 3564 0...
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 1
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 5, exc 3564 0 red 189481 331
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 2
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 6, exc 8071 0 goal 8071 0...
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 1
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 6, exc 8071 0 red 189481 331
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 2

    Watchdog is a watchdog task that was idle; nothing in the logs suggests it had done anything for days. Its job is to restart one of the applications if it dies, so it's a bit ironic that it is the first to get killed. Tail was monitoring a few log files; it is unlikely to have been consuming memory madly. The apache web-server only serves pages to a little old lady who only uses it to get to church on Sundays, and a couple of developers who were in bed asleep and hadn't visited a page on the site for a few weeks. The only traffic it might have had is from the port-scanners; all the content is password-protected and not linked from anywhere, so no spiders are interested. Python is running two separate custom applications. Nothing in the logs suggests they weren't humming along as normal. One of them is a relatively recent implementation, which makes it suspect #1. It doesn't have any data structures of any significance, and normally uses only about 8% of the total physical RAM. It hasn't misbehaved since. The grep is suspect #2, and the one I want to be guilty, because it was a once-off command. The command (which piped the output of a grep -r to another grep) had been started at least 30 minutes earlier, and the fact that it was still running is suspicious. However, I wouldn't have thought grep would ever use a significant amount of memory. It took a while for the OOM killer to get to it, which suggests it wasn't going mad, but the OOM killer stopped once it was killed, suggesting it may have been a memory-hog that finally satisfied the OOM killer's blood-lust.

  • Confusion about TCP packet analysis terms

    - by Berkay
    I'm analyzing our network and have some confusion about the terms below. I have a two-packet output from source to destination, and from it I have to extract the following features. I am not asking for precise answers, but please make these clear to me:

        - Packets with at least one byte of TCP data payload: it seems this corresponds to tcp.len > 0.
        - The minimum segment size (my confusion: are headers included or not?).
        - The average segment size observed during the lifetime of the connection. The definition: calculated as the actual data bytes reported divided by the actual data packets reported.
        - Total bytes in IP packets: this should be the ip_len value.
        - Total bytes in Ethernet: the total number of bytes sent, probably related to frame.len and frame.cap_len. These two fields are described as follows -- please also clarify them for me:
            frame.cap_len: frame length stored in the capture file
            frame.len: frame length on the wire
