Search Results

Search found 42735 results on 1710 pages for 'open market model'.


  • Concatenating lines between two files

    - by Nano HE
    file1.txt contains the lines: hello, tom, well. file2.txt contains the lines: world, jerry, done. How do I merge file1.txt with file2.txt line by line to create a new file, file3.txt, containing: hello world, tom jerry, well done? Thank you for reading and replying. Attached is the completed code:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Open both inputs and the output, bailing out with the reason on failure.
        open(F1, "<", "file1.txt") or die "Cannot open file1: $!\n";
        open(F2, "<", "file2.txt") or die "Cannot open file2: $!\n";
        open(MYFILE, ">>", "file3.txt") or die "Cannot open file3: $!\n";

        # Read the two files in step and join each pair of lines with a space.
        while (<F1>) {
            chomp;
            chomp(my $f2 = <F2>);
            print MYFILE $_ . " " . $f2 . "\n";
        }

        close(F1);
        close(F2);
        close(MYFILE);
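    Purely as an illustration (not part of the original post), the same line-by-line merge could be sketched in Python, using the filenames assumed above:

        # merge file1.txt and file2.txt line by line into file3.txt
        with open("file1.txt") as f1, open("file2.txt") as f2, open("file3.txt", "w") as out:
            for left, right in zip(f1, f2):
                # strip the trailing newlines before joining the two halves with a space
                out.write(left.rstrip("\n") + " " + right.rstrip("\n") + "\n")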

    Read the article

  • Check whether Excel file is Password protected

    - by Torben Klein
    I am trying to open an Excel (xlsm) file via VBA. It may or may not be protected with a (known) password. I am using this code:

        On Error Resume Next
        Workbooks.Open filename, Password:=user_entered_pw
        opened = (Err.Number = 0)
        On Error Goto 0

    Now, this works fine if the workbook has a password. But if it is unprotected, it can NOT be opened. Apparently this is a bug in XL2007 if workbook structure protection is also active (http://vbaadventures.blogspot.com/2009/01/possible-error-in-excel-2007.html). On old XL2003, supplying a password would open both unprotected and password-protected files. I tried:

        Workbooks.Open filename, Password:=user_entered_pw
        If (Err.Number <> 0) Then Workbooks.Open filename

    This works for unprotected and protected files. However, if the user enters a wrong password it falls through to the second line and pops up the "enter password" prompt, which I do not want. How do I get around this?

    Read the article

  • Sequence generators are being ignored

    - by luvfort
    I'm getting the following error while saving a object. However similar configuration is working for other model objects in my projects. Any help would be greatly appreciated. @Entity @Table(name = "ENROLLMENT_GROUP_MEMBERSHIPS", schema = "LEAD_ROUTING") public class EnrollmentGroupMembership implements Serializable, Comparable,Auditable { @javax.persistence.SequenceGenerator(name = "enrollmentGroupMemID", sequenceName = "S_ENROLLMENT_GROUP_MEMBERSHIPS") @Id @GeneratedValue(strategy = GenerationType.AUTO, generator = "enrollmentGroupMemID") @Column(name = "ID") private Long id; @ManyToOne() @JoinColumn(name = "TIER_WEIGHT_OID", referencedColumnName = "OID", updatable = false, insertable = false) private TierWeight tierWeight; public EnrollmentGroupMembership() { } } Code: @Entity @Table(name = "TIER_WEIGHT", schema = "LEAD_ROUTING") public class TierWeight implements Serializable, Auditable { @SequenceGenerator(name = "tierSequence",sequenceName = "S_TIER_WEIGHT") @Column(name = "OID") @Id @GeneratedValue(strategy = GenerationType.AUTO, generator = "tierSequence") private Long id; @OneToMany @JoinColumn(name = "TIER_WEIGHT_OID", referencedColumnName = "OID") private Set<EnrollmentGroupMembership> memberships; public TierWeight() { } } The logic layer's code is @Override public void createTier(String tierName, float weight) { TierWeight tier = new TierWeight(); tier.setWeight(weight); tier.setTier(tierName); tierWeightDAO.create(tier); } Similar Many-one configuration is working through out the project. I don't know why this one instance is failing. Any help would be greatly appreciated. The following is the error that I'm getting Caused by: org.hibernate.id.IdentifierGenerationException: ids for this class must be manually assigned before calling save(): edu.apollogrp.d2ec.model.TierWeight at org.hibernate.id.Assigned.generate(Assigned.java:3 3) at org.hibernate.event.def.AbstractSaveEventListener. saveWithGeneratedId(AbstractSaveEventListener.java :99) The log file is telling that the sequence generator tierSequence is not getting created. However other sequence generators are getting created. 2010-06-03 11:24:51,834 DEBUG [org.hibernate.cfg.AnnotationBinder:] Processing annotations of edu.apollogrp.d2ec.model.TierWeight.dateCreated 2010-06-03 11:24:51,834 DEBUG [org.hibernate.cfg.AnnotationBinder:] Processing annotations of edu.apollogrp.d2ec.model.TierWeight.dateCreated 2010-06-03 11:24:51,834 DEBUG [org.hibernate.cfg.Ejb3Column:] Binding column DATE_CREATED unique false ....................................... ....................................... 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] Processing annotations of edu.apollogrp.d2ec.model.CounselorAvailability.id 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.Ejb3Column:] Binding column OID unique false 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.Ejb3Column:] Binding column OID unique false 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] id is an id 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] id is an id 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] Add sequence generator with name: counselorAvailabilityID 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] Add sequence generator with name: counselorAvailabilityID While debugging, I see that the org.hibernate.impl.SessionFactoryImpl is returning the "Assigned" identifierGenerator. This is horrible. I've specified the identifierGenerator as "Auto". 
Please see the above code. As a sidenote, I was trying to debug and seeing how the objects are getting retrieved from the database. Looks like the enrollmentgroupmembership records have the tierweight value populated. However if I look at the tierweight object, it doesn't have the enrollmentgroupmembership records. I'm puzzled. I think these two problems must be related. Maddy.

    Read the article

  • JPA entity design / cannot delete entity

    - by timaschew
    I though its simple what I want, but I cannot find any solution for my problem. I'm using playframework 1.2.3 and it's using Hibernate as JPA. So I think playframework has nothing to do with the problem. I have some classes (I omit the nonrelevant fields) public class User { ... } public class Task { public DataContainer dataContainer; } public class DataContainer { public Session session; public User user; } public class Session { ... } So I have association from Task to DataContainer and from DataContainer to Sesssion and the DataContainer belongs to a User. The DataContainers can have always the same User, but the Session have to be different for each instance. And the DataContainer of a Task have also to be different in each instance. A DataContainer can have a Sesesion or not (it's optinal). I use only unidirectional assoc. It should be sufficient. In other words: Every Task must has one DataContainer. Every DataContainer must has one/the same User and can have one Session. To create a DB schema I use JPA annotations: @Entity public class User extends Model { ... } @Entity public class Task extends Model { @OneToOne(optional = false, cascade = CascadeType.ALL) public DataContainer dataContainer; } @Entity public class DataContainer extends Model { @OneToOne(optional = true, cascade = CascadeType.ALL) public Session session; @ManyToOne(optional = false, cascade = CascadeType.ALL) public User user; } @Entity public class Session extends Model { ... } BTW: Model is a play class and provides the primary id as long type. When I create some for each entity a object and 'connect them', I mean the associations, it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want that the Session (field) of the DataContainer will be set to null respectively the foreign key (session_id) should be unset in the database. This will be okay, because its optional. I don't know, I think I have multiple problems. Am I using the right annotation @OneToOne ? I found on the internet some additional annotation and attributes: @JoinColumn and a mappedBy attribute for the inverse relationship. But I don't have it, because its not bidirectional. Or is a bidirectional assoc. essentially? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE) the the contraint changed from NO ACTIONs when update or delete to: ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id) REFERENCES annotation_session (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE; But in this case, when I delete a session, the DataContainer and User is deleted. That's wrong for me. EDIT: I'm using postgresql 9, the jdbc stuff is included in play, my only db config is db=postgres://app:app@localhost:5432/app

    Read the article

  • Using one data source across multiple views in Kendo UI SPA

    - by user3731783
    I am trying to build a Kendo UI SPA. I have two views. View 1 (appListView) shows Application Details in a grid and view 2 (activityView) will have a dropdown for application names and a grid that shows the activity for selected application As I am loading all the application details on the loading of view 1, I would like to re-use those details to populate the dropdown on view 2. Please see my code below. Everything works fine but when I go to View 2 it makes a call to the service again to get application details. I would like to use the existing data if it is already loaded and if the uses comes to view 2 directly then it should get application data also. I am not sure what I am missing in the code. View Markup: <script id="appListView" type="text/x-kendo-template"> <h3 data-bind="html: displayName"></h3> <div data-role="grid" data-editable="{'mode':'popup'}" data-bind="source: items" data-columns="[ {'field': 'Name'}, {'field': 'ContactEmail','title':'Contact Email'} ]"> </div> </script> <script id="" type="text\x-kendo-template"> <div> Activity for Application&nbsp;&nbsp; <input name="AppName" data-role="dropdownlist" data-source="appsModel.items" data-text-field="Name" data-value-field="Id" data-option-label="Choose an application name" style="width:250px;" /> </div> <div id="Activities" data-role="grid" data-bind="source: items" data-auto-bind="false" data-columns="[ {'field': 'Domain','title':'Domain'}, {'field': 'ActivityType','title':'Activity Type'} ]"> </div> </script> js with DataSource and View Model: //data sources var applications = new kendo.data.DataSource({ schema: { model: { id: "Id" } }, serverFiltering : true, transport: { read: { url: '/api/App', dataType: 'json', type:'GET' } } }); var activities = new kendo.data.DataSource({ schema: { model: { id: "Id" } }, transport: { read: { url: '/api/Activity', dataType: 'json', type: 'GET' }, parameterMap: function (data, type) { if (type == "read") { return 'appId=' + $("#AppName").val() ; } } } }); //Models var appsModel = kendo.observable({ items: applications, displayName: 'My Applications' }); var activityModel = kendo.observable({ items: activities, onAppChange: function(t){ $("#Activities").data("kendoGrid").dataSource.read(); }, dispayName: 'Application Activities' }); //views var layout = new kendo.Layout("layout-template"); var appListView = new kendo.View("appListView", { model: appsModel }); var activityView = new kendo.View("activityView", { model: activityModel }); Thank you for taking time to read this long question.

    Read the article

  • MVC Bootstrap: Autocomplete doesn't show properly

    - by kicked11
    I have a MVC website and it had a searchbox with autocomplete, now I changed the layout using bootstrap. But now the autocomplete isn't been shown correctly anymore. See the picture the suggestions are not shown right. the autocomplete goes through the text. I was fine before I used bootstrap. I am using a SQL server to get the data and this is js file (I'm not good at ajax, i took it from a tutorial I followed) $(function () { var ajaxFormSubmit = function () { var $form = $(this); var options = { url: $form.attr("action"), type: $form.attr("method"), data: $form.serialize() }; $.ajax(options).done(function (data) { var $target = $($form.attr("data-aptitude-target")); var $newHtml = $(data); $target.replaceWith($newHtml); $newHtml.show("slide", 200); }); return false; }; var submitAutocompleteForm = function (event, ui) { var $input = $(this); $input.val(ui.item.label); var $form = $input.parents("form:first"); $form.submit(); }; var createAutocomplete = function () { var $input = $(this); var options = { source: $input.attr("data-aptitude-autocomplete"), select: submitAutocompleteForm }; $input.autocomplete(options); }; $("form[data-aptituder-ajax='true']").submit(ajaxFormSubmit); $("input[data-aptitude-autocomplete]").each(createAutocomplete); }); this is the form in my view <form method="get" action="@Url.Action("Index")" data-aptitude-ajax="true" data-aptitude-target="#testList"> <input type="search" name="searchTerm" data-aptitude-autocomplete="@Url.Action("Autocomplete")" /> <input type="submit" value=Search /> And this is part of the controller of the view public ActionResult Autocomplete(string term) { var model = db.tests .Where(r => term == null || r.name.Contains(term)) .Select(r => new { label = r.name }); return Json(model, JsonRequestBehavior.AllowGet); } // // GET: /Test/ public ActionResult Index(string searchTerm = null) { var model = db.tests .Where(r => searchTerm == null || r.name.StartsWith(searchTerm)); if (Request.IsAjaxRequest()) { return PartialView("_Test", model); } return View(model); } I'm new to ajax as well as bootstrap 3. I got the searchfunction and autocomplete from a tutorial I followed. Anybody any idea on how to fix this, because it worked fine? Thanks in advance!

    Read the article

  • Constrained A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrict a little bit. Basically : I use an A* to find the shortest path between 2 randomly placed room in 3D space, and then build a corridor between them. The problem I found is that sometimes it makes chimney like corridors that are not ideal, so I constrict the A* so that if the last movement was up or down, you go sideways. Everything is fine, but in some corner cases, it fails to find a path (when there is obviously one). Like here between the blue and red dot : (i'm in unity btw, but i don't think it matters) Here is the code of the actual A* (a bit long, and some redundency) while(current != goal) { //add stair up / stair down foreach(Node<GridUnit> test in current.Neighbors) { if(!test.Data.empty && test != goal) continue; //bug at arrival; if(test == goal && penul !=null) { Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(currentDiff.y,0)) { //wanna drop on the last if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,to.Data.bounds.center)) { continue; } else { if(Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) && Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z)) { continue; } } } } if(current.Data.parentUnit != null) { Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center; Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(previousDiff.y,0)) { if(!Mathf.Approximately(currentDiff.y,0)) { //you wanna drop now : continue; } if(current.Data.parentUnit.parentUnit != null) { if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,current.Data.parentUnit.parentUnit.bounds.center)) { continue; }else { if(Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) && Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z)) { continue; } } } } } g = current.Data.g + HEURISTIC(current.Data,test.Data); h = HEURISTIC(test.Data,goal.Data); f = g + h; if(open.Contains(test) || closed.Contains(test)) { if(test.Data.f > f) { //found a shorter path going passing through that point test.Data.f = f; test.Data.g = g; test.Data.h = h; test.Data.parentUnit = current.Data; } } else { //jamais rencontré test.Data.f = f; test.Data.h = h; test.Data.g = g; test.Data.parentUnit = current.Data; open.Add(test); } } closed.Add (current); if(open.Count == 0) { Debug.Log("nothingfound"); //nothing more to test no path found, stay to from; List<GridUnit> r = new List<GridUnit>(); r.Add(from.Data); return r; } //sort open from small to biggest travel cost open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) { return (int)(x.Data.f-y.Data.f); }); //get the smallest travel cost node; Node<GridUnit> smallest = open[0]; current = smallest; open.RemoveAt(0); } //build the path going backward; List<GridUnit> ret = new List<GridUnit>(); if(penul != null) { ret.Insert(0,to.Data); } GridUnit cur = goal.Data; ret.Insert(0,cur); do{ cur = cur.parentUnit; ret.Insert(0,cur); } while(cur != from.Data); return ret; You see at the start of the foreach i constrict the A* like i said. If you have any insight it would be cool. Thanks
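    A minimal sketch of one way around the dead-end cases: fold the "last move was vertical" flag into the search state itself, so a cell reached vertically can still be expanded again later when it is reached by a flat move instead of being closed for good. This is written in Python rather than the Unity/C# above, and passable and the choice of y as the vertical axis are assumed placeholders, not names from the original code:

        import heapq
        from itertools import count

        def constrained_astar(start, goal, passable):
            # A* over (cell, came_in_vertically) states: a vertical step may not follow
            # another vertical step, but the same cell can still be expanded again when
            # it is reached by a flat move, so fewer paths are lost.
            def h(a, b):
                return sum(abs(p - q) for p, q in zip(a, b))

            moves = [(1, 0, 0), (-1, 0, 0), (0, 0, 1), (0, 0, -1), (0, 1, 0), (0, -1, 0)]
            start_state = (start, False)
            tie = count()                      # tie-breaker so the heap never compares states
            frontier = [(h(start, goal), next(tie), start_state)]
            came_from = {start_state: None}
            g_score = {start_state: 0}

            while frontier:
                _, _, state = heapq.heappop(frontier)
                cell, came_vertical = state
                if cell == goal:
                    path, cur = [], state
                    while cur is not None:      # walk the parent chain back to the start
                        path.append(cur[0])
                        cur = came_from[cur]
                    return path[::-1]
                for dx, dy, dz in moves:
                    vertical = dy != 0
                    if vertical and came_vertical:          # the "no chimney" constraint
                        continue
                    nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
                    if not passable(nxt):
                        continue
                    nstate = (nxt, vertical)
                    ng = g_score[state] + 1
                    if ng < g_score.get(nstate, float("inf")):
                        g_score[nstate] = ng
                        came_from[nstate] = state
                        heapq.heappush(frontier, (ng + h(nxt, goal), next(tie), nstate))
            return None                         # no path under the constraint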

    Read the article

  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously I had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt whether my chosen solution was correct for the problem, so this time I will explain the problem and ask whether (a) I am on the right track and (b) there is a way around my current brick wall. I am currently building a web interface to let an existing database be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this:

        class Musician(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            dob = models.DateField()

        class Album(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

        class Instrument(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

    So I have one central table (Musician) and several tables of associated data, related to it by either ForeignKey or OneToOne fields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on data in the main or related tables. Likewise, users can then select which piece of data is used to rank the results presented to them. The results are viewed initially as a two-dimensional table with a single row per Musician, with the selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here everything is fine and I have a working implementation. However, it is important that a given user can upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I tried to do this was to add the models:

        class UserDataSets(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            description = models.CharField(max_length=64)
            results = models.ManyToManyField(Musician, through='UserData')

        class UserData(models.Model):
            artist = models.ForeignKey(Musician)
            dataset = models.ForeignKey(UserDataSets)
            score = models.IntegerField()

            class Meta:
                unique_together = (("artist", "dataset"),)

    I have a simple upload mechanism enabling users to upload a data set file consisting of a 1-to-1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent from each other and will often contain entries for the same musician. This worked fine for displaying the data; starting from a given artist I can do something like this:

        artist = Musician.objects.get(pk=1)
        dataset = UserDataSets.objects.get(pk=5)
        print artist.userdata_set.get(dataset=dataset.pk)

    However, this approach fell over when I came to implement filtering and ordering of a queryset of musicians based on the data contained in a single user dataset. For example, I can easily order the queryset based on all of the data in the UserData table like this:

        artists = Musician.objects.all().order_by('userdata__score')

    But that does not help me order by the results of one single user dataset. Likewise, I need to be able to filter the queryset based on the "scores" from different user data sets (e.g. find all musicians with a score of 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?
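    One possible approach is to pull each dataset's score in as a per-row annotation and then filter and order on it like an ordinary field. This is only a sketch against the models above; it assumes a reasonably recent Django (1.11 or later, which postdates the original question) and that ds1 and ds2 are existing UserDataSets instances:

        from django.db.models import IntegerField, OuterRef, Subquery

        # The score a musician received in one specific dataset (NULL if absent).
        ds1_score = UserData.objects.filter(dataset=ds1, artist=OuterRef('pk')).values('score')[:1]
        ds2_score = UserData.objects.filter(dataset=ds2, artist=OuterRef('pk')).values('score')[:1]

        musicians = (
            Musician.objects
            .annotate(ds1_score=Subquery(ds1_score, output_field=IntegerField()),
                      ds2_score=Subquery(ds2_score, output_field=IntegerField()))
            .filter(ds1_score=5, ds2_score__lt=2)   # e.g. a score of 5 in dataset1 and under 2 in dataset2
            .order_by('-ds1_score')
        )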

    Read the article

  • file_operations question: how do I know if a process that opened a file for writing has decided to close it?

    - by djTeller
    Hi kernel gurus, I'm currently writing a simple "multicaster" module. Only one process can open a proc filesystem file for writing; the rest can open it for reading. To do so I use the inode_operations .permission callback: I check the operation, and when I detect that someone has opened the file for writing I set a flag ON. I need a way to detect that the process which opened the file for writing has decided to close it, so I can set the flag OFF and someone else can open it for writing. Currently, when someone opens the file for writing I save that process's current->pid, and when the .close callback is called I check whether the calling process is the one I saved earlier. Is there a better way to do this? Without saving the pid, perhaps by checking the files the current process has open and their permissions... Thanks!

    Read the article

  • MVC: how to implement two different post actions

    - by AnonyMouse
    I'm developing this really important squirrel application. There is a wizard where squirrels are added to the database. Say there are three screens in this wizard: squirrel name details, height and weight, and nut storage. At each step of the wizard I want to save the details to the database. The height and weight view looks like:

        @model HeightWeightViewModel
        @{
            ViewBag.Title = "Height and weight";
        }
        <h2>Height and weight</h2>
        @using (Html.BeginForm())
        {
            <h3>Height</h3>
            <div>
                @Html.EditorFor(model => model.Squirrel.Height)
            </div>
            <h3>Weight</h3>
            <div>
                @Html.EditorFor(model => model.Squirrel.Weight)
            </div>
            <input type="submit" value="Previous" />
            <input type="submit" value="Next" />
        }

    I'm hoping that the Previous and Next buttons will both save these details. The Previous button, while saving, should also take the user to the squirrel name details page; the Next button should save and take the user to the nut storage page. I got the Next button working using:

        public ActionResult Edit(SquirrelViewModel squirrelViewModel)
        {
            _unitOfWork.SaveHeightWeight(squirrelViewModel);
            return RedirectToAction("Edit", "NutStorage", new { id = squirrelViewModel.Squirrel.Id });
        }

    So the Next button saves the details and sends the user to the NutStorage page. The Previous button does the same as Next, but I actually want it to send the user to the first step of the wizard after saving. I'm not sure how to do this. Would I post to another method for Previous? I can't imagine how to implement this. Maybe I should use ActionLinks instead of submit buttons, but then the details would not be posted and saved. Can anyone suggest how to get the Previous button to save and send the user to the first page of the wizard while keeping the Next functionality working?

    Read the article

  • How can I fix this jQuery script for a submenu?

    - by Giulio Bambini
    This is the code, and the result is at www.studionews24.com; you can test the menu by clicking inside it.

        $(document).ready(function () {
            var $links = $('#menu-menu-1 .menu-item a').click(function () {
                var submenu = $(this).next();
                $subs.not(submenu).hide();
                submenu.toggle(500);
                $("#menu-2").slideToggle(300);
            });
            var $subs = $links.next();
        });

    The problem is that if I click on the menu, the submenu appears, but if I open another item in the menu without first closing the submenu that is already open, it doesn't work properly. If I click another .menu-item a while #menu-2 is open, what happens? The script correctly closes one submenu and opens the other, but my #menu-2 only closes. And then if I close the .menu-item a, #menu-2 opens again. How can I fix this?

    Read the article

  • CodeIgniter Error Log Info + Errors

    - by fatnjazzy
    Hi, IS there a way to save in the log, Info + Errors without debug? Howcome debug level apears with info? If i want to log info "Account id 4345 was deleted by Admin", why do i need to see all of these: DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Hooks Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 URI Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Router Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Output Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Input Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Global POST and COOKIE data sanitized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Language Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Loader Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config file loaded: config/safe_charge.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config file loaded: config/web_fx.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: loadutils_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: objectsutils_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: logutils_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: password_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Database Driver Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 cURL Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Language Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Account MX_Controller Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/pending_account_model.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/proccess_accounts_model.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/web_fx_model.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/trader_account_type_spreads.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/trader_accounts.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized Thanks

    Read the article

  • Cakephp beforeFind() - How do I add a JOIN condition AFTER the belongsTo association is added?

    - by michael
    I'm in Model-beforeFind($queryData), trying to add a JOIN condition to the queryData on a model which has belongsTo associations. Unfortunately, the new JOIN references a table in the belongsTo association, so it must appear AFTER the belongsTo in the query. Here is my Tagged-belongsTo association: app\plugins\tags\models\tagged.php (line 192) Array ( [Tag] => Array ( [className] => Tag [foreignKey] => tag_id [conditions] => [fields] => [order] => [counterCache] => ) [Group] => Array ( [className] => Group [foreignKey] => foreign_key [conditions] => Array ( [Tagged.model] => Group ) [fields] => [order] => [counterCache] => ) ) Here is the JOIN added in Tagged-beforeFind(), notice that the belongsTo joins have not yet been added: app\plugins\tags\models\tagged.php (line 194) Array ( [conditions] => Array ( [Tag.keyname] => europe ) [fields] => Array ( [0] => DISTINCT Group.* [1] => GroupPermission.* ) [joins] => Array ( [0] => Array ( [table] => permissions [alias] => GroupPermission [foreignKey] => [type] => INNER [conditions] => Array ( [GroupPermission.model] => Group [0] => GroupPermission.foreignId = Group.id [or] => Array ( ... ) ) ) ) [limit] => [offset] => [order] => Array ( [0] => ) [page] => 1 [group] => [callbacks] => 1 [by] => europe [model] => Group ) When I check the output, it fails with "1054: Unknown column 'Group.id' in 'on clause'" because the Permissions join appeared BEFORE the Groups join. SELECT DISTINCT `Group`.*, `GroupPermission`.* FROM `tagged` AS `Tagged` INNER JOIN permissions AS `GroupPermission` ON (`GroupPermission`.`model` = 'Group' AND `GroupPermission`.`foreignId` = `Group`.`id` AND (...)) LEFT JOIN `tags` AS `Tag` ON (`Tagged`.`tag_id` = `Tag`.`id`) LEFT JOIN `groups` AS `Group` ON (`Tagged`.`foreign_key` = `Group`.`id` AND `Tagged`.`model` = 'Group') WHERE `Tag`.`keyname` = 'europe' But this SQL (with Permissions joined moved to the end) works fine: SELECT DISTINCT `Group`.*, `GroupPermission`.* FROM `tagged` AS `Tagged` LEFT JOIN `tags` AS `Tag` ON (`Tagged`.`tag_id` = `Tag`.`id`) LEFT JOIN `groups` AS `Group` ON (`Tagged`.`foreign_key` = `Group`.`id` AND `Tagged`.`model` = 'Group') INNER JOIN permissions AS `GroupPermission` ON (`GroupPermission`.`model` = 'Group' AND `GroupPermission`.`foreignId` = `Group`.`id` AND (...)) WHERE `Tag`.`keyname` = 'europe' How do I add my join in beforeFind() after the belongsTo join?

    Read the article

  • Test plans and how best to write them

    - by Karim
    We're trying to figure out the best way to write tests in our test plan. Specifically, when writing a test that is meant to be used by anyone, including QA staff, should the steps in the test be very specific, or broader, giving the tester more leeway in how the task can be accomplished? As a very simple example, if you're testing opening a document in a word processor, should the test read:
    1. Using the mouse, open the file menu.
    2. Choose "Open File..." in the file menu.
    3. In the open file dialog that appears, navigate to x and double-click the document called y.
    Or:
    1. Bring up the file open dialog.
    2. Open the file y.
    Now, I realize one answer is probably going to be "it depends on what you're trying to test", but I'm trying to answer a broader question here. If the test steps are too specific, do we risk (a) making the testing process too laborious and tedious and, more importantly, (b) missing something because we wrote down too specific a path to achieve the goal? Alternatively, if we make them broad, do we depend too much on the whims of the tester at the time and lose crucial coverage of the paths that are most common for customers/clients?

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem processing a largish file in Python. All I'm doing is:

        import gzip

        counter = 0
        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close()   # note: the original had f.close without parentheses, which never actually closes the file

    This takes around 10m25s just to open the file, read the lines, and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular-expression work), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_;
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried uncompressing the file and reading it with open instead of gzip.open, but that made very little difference to the overall time.
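    One workaround (not from the original post, just a sketch) is to do what the Perl version does and let an external zcat handle the decompression, streaming its output into Python; this assumes zcat is on the PATH:

        import subprocess

        def count_lines(path):
            # Pipe the file through zcat and iterate over its stdout line by line,
            # which avoids the per-line overhead of gzip.open on older Pythons.
            proc = subprocess.Popen(['zcat', path], stdout=subprocess.PIPE)
            counter = 0
            for line in proc.stdout:
                counter += 1
                if counter % 1000000 == 0:
                    print(counter)
            proc.stdout.close()
            proc.wait()
            return counter

    Another commonly suggested option is to wrap the gzip file object in a buffered reader, e.g. io.BufferedReader(gzip.open(path, 'rb')), so that line iteration reads from a larger buffer instead of hitting the decompressor for every line.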

    Read the article

  • Meshing different systems of keys together in XML Schema

    - by Tom W
    Hello SO, I'd like to ask people's thoughts on an XSD problem I've been pondering. The system I am trying to model is thus: I have a general type that represents some item in a hypothetical model. The type is abstract and will be inherited by all manner of different model objects, so the model is heterogeneous. Furthermore, some types exist only as children of other types. Objects are to be given an identifier, but the scope of uniqueness of this identifier varies. Some objects - we will call them P (for Parent) objects - must have a globally unique identifier. This is straightforward and can use the xs:key schema element. Other objects (we can call them C objects, for Child) are children of a P object and must have an identifier that is unique only in the scope of that parent. For example, object P1 has two children, object C1 and C2, and object P2 has one child, object C3. In this system, the identifiers given could be as follows: P1: 1 (1st P object globally) P2: 2 (2nd P object globally) C1: 1 (1st C object of P1) C2: 2 (2nd C object of P1) C3: 1 (1st C object of P2) I want the identity syntax of every model object to be identical if possible, therefore my first pass at implementing is to define a single type: <xs:complexType name="ModelElement"> <xs:attribute name="IDMode" type="IdentityMode"/> <xs:attribute name="Identifier" type="xs:string"/> </xs:complexType> where the IdentityMode is an enumerated value: <xs:simpleType name="IdentityMode"> <xs:restriction base="xs:string"> <xs:enumeration value="Identified"/> <xs:enumeration value="Indexed"/> <xs:enumeration value="None"/> </xs:restriction> </xs:simpleType> Here "Identified" signifies a global identifier, and "Indexed" indicates an identifier local only to the parent. My question is, how do I enforce these uniqueness conditions using unique, key or other schema elements based on the IdentityMode property of the given subtype of ModelElement?

    Read the article

  • ASP.Net MVC - Models and User Controls

    - by cdotlister
    Hi guys, I have a View with a Master Page. The user control makes use of a Model: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<BudgieMoneySite.Models.SiteUserLoginModel>" %> This user control is shown on all screens (Part of the Master Page). If the user is logged in, it shows a certain text, and if the user isn't logged in, it offers a login box. That is working OK. Now, I am adding my first functional screen. So I created a new view... and, well, i generated the basic view code for me when I selected the controller method, and said 'Create View'. My Controller has this code: public ActionResult Transactions() { List<AccountTransactionDetails> trans = GetTransactions(); return View(trans); } private List<AccountTransactionDetails> GetTransactions() { List<AccountTransactionDto> trans = Services.TransactionServices.GetTransactions(); List<AccountTransactionDetails> reply = new List<AccountTransactionDetails>(); foreach(var t in trans) { AccountTransactionDetails a = new AccountTransactionDetails(); foreach (var line in a.Transactions) { AccountTransactionLine l = new AccountTransactionLine(); l.Amount = line.Amount; l.SubCategory = line.SubCategory; l.SubCategoryId = line.SubCategoryId; a.Transactions.Add(l); } reply.Add(a); } return reply; } So, my view was generated with this: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<System.Collections.Generic.List<BudgieMoneySite.Models.AccountTransactionDetails>>" %> Found <%=Model.Count() % Transactions. All I want to show for now is the number of records I will be displaying. When I run it, I get an error: "The model item passed into the dictionary is of type 'System.Collections.Generic.List`1[BudgieMoneySite.Models.AccountTransactionDetails]', but this dictionary requires a model item of type 'BudgieMoneySite.Models.SiteUserLoginModel'." It looks like the user control is being rendered first, and as the Model from the controller is my List<, it's breaking! What am I doing wrong?

    Read the article

  • json data not rendered in backbone view

    - by user2535706
    I have been trying to render the json data to the view by calling the rest api and the code is as follows: var Profile = Backbone.Model.extend({ dataType:'jsonp', defaults: { intuitId: null, email: null, type: null }, }); var ProfileList = Backbone.Collection.extend({ model: Profile, url: '/v1/entities/6414256167329108895' }); var ProfileView = Backbone.View.extend({ el: "#profiles", template: _.template($('#profileTemplate').html()), render: function() { _.each(this.model.models, function(profile) { var profileTemplate = this.template(this.model.toJSON()); $(this.el).append(tprofileTemplate); }, this); return this; } }); var profiles = new ProfileList(); var profilesView = new ProfileView({model: profiles}); profiles.fetch(); profilesView.render(); and the html file is as follows: <!DOCTYPE html> <html> <head> <title>SPA Example</title> <!-- <link rel="stylesheet" type="text/css" href="src/css/reset.css" /> <link rel="stylesheet" type="text/css" href="src/css/harmony_compiled.css" /> --> </head> <body class="harmony"> <header> <div class="title">SPA Example</div> </header> <div id="profiles"></div> <script id="profileTemplate" type="text/template"> <div class="profile"> <div class="info"> <div class="intuitId"> <%= intuitId %> </div> <div class="email"> <%= email %> </div> <div class="type"> <%= type %> </div> </div> </div> </script> </body> </html> This gives me an error and the render function isn't invoking properly and the render function is called even before the REST API returns the JSON response. Could anyone please help me to figure out where I went wrong. Any help is highly appreciated Thank you

    Read the article

  • My store returns no code ID and throws a 404 error (Magento)

    - by numerical25
    I know what the issue is but I dont know how to fix it. I just migrated my magento store locally and I guess possibly some data may have been lost when transferring the DB. the DB is very large. Anyhow, when I login to my admin page, I get a 404 error, page was not found. I debugged the issue and got down to the wire. The exception is thrown in Mage/Core/Model/App.php. Line 759 to be exacted. The following is a snippet. Mage/Core/Model/App.php if (empty($this->_stores[$id])) { $store = Mage::getModel('core/store'); /* @var $store Mage_Core_Model_Store */ if (is_numeric($id)) { $store->load($id); // THIS ID IS FROM Mage_Core_Model_App::ADMIN_STORE_ID and its empty which causes the error } elseif (is_string($id)) { $store->load($id, 'code'); } if (!$store->getCode()) { // RETURNS FALSE HERE BECAUSE NO ID Specified $this->throwStoreException(); } $this->_stores[$store->getStoreId()] = $store; $this->_stores[$store->getCode()] = $store; } The store returns null because $id is null so it therefore does not load any model which explains why it returns false when calling getCode() [EDIT] If you want clarification, please ask for more before voting my post down. Remember I am still trying to get help not get neglected. I am using Version 1.4.1.1. When I type in the URL for admin, I get a 404 page. I walked through the code thouroughly and found that the Model MAGE_CORE_MODEL_STORE::getCode(); Returns Null which triggers the exception. and ends the script. I do not have any other detail. I further troubleshooted the issue by checking the database and that is what the screen shot is. Showing that there is infact data in the Code Colunn. So my question is why is the Model returning a empty column when the column clearly has a value. What can I do to further troubleshoot and figure out why its not working [EDIT UPDATE NEW] I did some research. the reason its returning NULL is because the store ID is null being passed Mage::getStoreConfigFlag('web/secure/use_in_adminhtml', Mage_Core_Model_App::ADMIN_STORE_ID); // THIS IS THE ID being specified Mage_Core_Model_App::ADMIN_STORE_ID has no value in it, so this method throws the exception. Not sure why how to fix this.

    Read the article

  • MySqlConnection does not really close

    - by stighy
    Hi guys, I have a huge problem. Take a look at this sample code:

        Private Sub FUNCTION1()
            Dim conn As New MySqlConnection(myconnstring)
            conn.Open()
            ' ---- do something
            FUNCTION2()
            ' ----
            conn.Close()
        End Sub

        Private Sub FUNCTION2()
            Dim conn As New MySqlConnection(myconnstring)
            ' ....
            conn.Open()
            ' -- do something
            conn.Close()
        End Sub

    Even though I regularly close all my connections, they remain "open" on the MySQL server. I know this because I'm using the MySQL administration tool to check how many connections I open, "line by line", through my source code. In fact I often get "user has exceeded the 'max_user_connections' resource (current value: 5)". My host permits ONLY 5 connections, but I think that if I write GOOD source code this should not be a problem. So my question is: why do those "damn" connections remain open?? Thank you in advance!

    Read the article

  • Proliant server will not accept new hard disks in RAID 1+0?

    - by Leigh
    I have a HP ProLiant DL380 G5, I have two logical drives configured with RAID. I have one logical drive RAID 1+0 with two 72 gb 10k sas 1 port spare no 376597-001. I had one hard disk fail and ordered a replacement. The configuration utility showed error and would not rebuild the RAID. I presumed a hard disk fault and ordered a replacement again. In the mean time I put the original failed disk back in the server and this started rebuilding. Currently shows ok status however in the log I can see hardware errors. The new disk has come and I again have the same problem of not accepting the hard disk. I have updated the P400 controller with the latest firmware 7.24 , but still no luck. The only difference I can see is the original drive has firmware 0103 (same as the RAID drive) and the new one has HPD2. Any advice would be appreciated. Thanks in advance Logs from server ctrl all show config Smart Array P400 in Slot 1 (sn: PAFGK0P9VWO0UQ) array A (SAS, Unused Space: 0 MB) logicaldrive 1 (68.5 GB, RAID 1, Interim Recovery Mode) physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 73.5 GB, OK) physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, Failed array B (SAS, Unused Space: 0 MB) logicaldrive 2 (558.7 GB, RAID 5, OK) physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 300 GB, OK) physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 300 GB, OK) physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 300 GB, OK) ctrl all show config detail Smart Array P400 in Slot 1 Bus Interface: PCI Slot: 1 Serial Number: PAFGK0P9VWO0UQ Cache Serial Number: PA82C0J9VWL8I7 RAID 6 (ADG) Status: Disabled Controller Status: OK Hardware Revision: E Firmware Version: 7.24 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Cache Status Details: A cache error was detected. Run more information. 
Cache Ratio: 100% Read / 0% Write Drive Write Cache: Disabled Total Cache Size: 256 MB Total Cache Memory Available: 208 MB No-Battery Write Cache: Disabled Battery/Capacitor Count: 0 SATA NCQ Supported: True Array: A Interface Type: SAS Unused Space: 0 MB Status: Failed Physical Drive Array Type: Data One of the drives on this array have failed or has Logical Drive: 1 Size: 68.5 GB Fault Tolerance: RAID 1 Heads: 255 Sectors Per Track: 32 Cylinders: 17594 Strip Size: 128 KB Full Stripe Size: 128 KB Status: Interim Recovery Mode Caching: Enabled Unique Identifier: 600508B10010503956574F305551 Disk Name: \\.\PhysicalDrive0 Mount Points: C:\ 68.5 GB Logical Drive Label: A0100539PAFGK0P9VWO0UQ0E93 Mirror Group 0: physicaldrive 2I:1:2 (port 2I:box 1:bay 2, S Mirror Group 1: physicaldrive 2I:1:1 (port 2I:box 1:bay 1, S Drive Type: Data physicaldrive 2I:1:1 Port: 2I Box: 1 Bay: 1 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 73.5 GB Rotational Speed: 10000 Firmware Revision: 0103 Serial Number: B379P8C006RK Model: HP DG072A9B7 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:2 Port: 2I Box: 1 Bay: 2 Status: Failed Drive Type: Data Drive Interface Type: SAS Size: 72 GB Rotational Speed: 15000 Firmware Revision: HPD9 Serial Number: D5A1PCA04SL01244 Model: HP EH0072FARUA PHY Count: 2 PHY Transfer Rate: Unknown, Unknown Array: B Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data Logical Drive: 2 Size: 558.7 GB Fault Tolerance: RAID 5 Heads: 255 Sectors Per Track: 32 Cylinders: 65535 Strip Size: 64 KB Full Stripe Size: 128 KB Status: OK Caching: Enabled Parity Initialization Status: Initialization Co Unique Identifier: 600508B10010503956574F305551 Disk Name: \\.\PhysicalDrive1 Mount Points: E:\ 558.7 GB Logical Drive Label: AF14FD12PAFGK0P9VWO0UQD007 Drive Type: Data physicaldrive 1I:1:5 Port: 1I Box: 1 Bay: 5 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 300 GB Rotational Speed: 10000 Firmware Revision: HPD4 Serial Number: 3SE07QH300009923X1X3 Model: HP DG0300BALVP Current Temperature (C): 32 Maximum Temperature (C): 45 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:3 Port: 2I Box: 1 Bay: 3 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 300 GB Rotational Speed: 10000 Firmware Revision: HPD4 Serial Number: 3SE0AHVH00009924P8F3 Model: HP DG0300BALVP Current Temperature (C): 34 Maximum Temperature (C): 47 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:4 Port: 2I Box: 1 Bay: 4 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 300 GB Rotational Speed: 10000 Firmware Revision: HPD4 Serial Number: 3SE08NAK00009924KWD6 Model: HP DG0300BALVP Current Temperature (C): 35 Maximum Temperature (C): 47 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown

    Read the article

  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is pretty much very common to have a sample database for any database product. Different companies keep on improving their product and keep on coming up with innovation in their product. To demonstrate the capability of their new enhancements they need the sample database. Microsoft have various sample database available for free download for their SQL Server Product. I have collected them here in a single blog post. Download an AdventureWorks Database The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources. Coconut Dal Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft’s Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead and Coconut Dal might be the answer.  DataBooster – Extension to ADO.NET Data Provider The dbParallel DataBooster library is a high-performance extension to ADO.NET Data Provider, includes two aspects: 1) A slimmed down API encapsulation which simplified the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class DbAccess, to help application with a clean DAL, avoid over-packing and redundant-copy of data transfer. 2) A booster for writing mass data onto database. Base on a rational utilization of database concurrency and a effective utilization of network bandwidth. Tabular AMO 2012 The sample is made of two project parts. The first part is a library of functions to manage tabular models -AMO2Tabular V2-. The second part is a sample to build a tabular model -AdventureWorks Tabular AMO 2012- using the AMO2Tabular library; the created model is similar to the ‘AdventureWorks Tabular Model 2012. SQL Server Analysis Services Product Samples SQL Server Analysis Services provides, a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining. Analysis Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Reporting Services Product Samples This project contains Reporting Services samples released with Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers’ forum. Reporting Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Integration Services Product Samples This project contains Integration Services samples released with Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers’ forum. 
Integration Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. Windows Azure SQL Reporting Admin Sample The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of SQL Reporting APIs, and manages (add/update/delete) permissions of SQL Reporting users. Windows Azure SQL Reporting ReportViewer-SOAP API usage sample These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure Web application. Enterprise Library 5.0 – Integration Pack for Windows Azure This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database

    Read the article

  • Major Analyst Report Chooses Oracle As An ECM Leader

    - by brian.dirking(at)oracle.com
    Oracle announced that Gartner, Inc. has named Oracle as a Leader in its latest "Magic Quadrant for Enterprise Content Management" in a press release issued this morning. Gartner's Magic Quadrant reports position vendors within a particular quadrant based on their completeness of vision and ability to execute. According to Gartner, "Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clearly articulated vision. In the context of ECM, they have strong channel partners, presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technology or vertical market. Leaders deliver a suite that addresses market demand for direct delivery of the majority of core components, though these are not necessarily owned by them, tightly integrated, unique or best-of-breed in each area. We place more emphasis this year on demonstrated enterprise deployments; integration with other business applications and content repositories; incorporation of Web 2.0 and XML capabilities; and vertical-process and horizontal-solution focus. Leaders should drive market transformation." "To extend content governance and best practices across the enterprise, organizations need an enterprise content management solution that delivers a broad set of functionality and is tightly integrated with business processes," said Andy MacMillan, vice president, Product Management, Oracle. "We believe that Oracle's position as a Leader in this report is recognition of the industry-leading performance, integration and scalability delivered in Oracle Enterprise Content Management Suite 11g." With Oracle Enterprise Content Management Suite 11g, Oracle offers a comprehensive, integrated and high-performance content management solution that helps organizations increase efficiency, reduce costs and improve content security. In the report, Oracle is grouped among the top three vendors for execution, and is the furthest to the right, placing Oracle as the most visionary vendor. This vision stems from Oracle's integration of content management right into key business processes, delivering content in context as people need it. Using a PeopleSoft Accounts Payable user as an example, as an employee processes an invoice, Oracle ECM Suite brings that invoice up on the screen so the processor can verify the content right in the process, improving speed and accuracy. Oracle integrates content into business processes such as Human Resources, Travel and Expense, and others, in the major enterprise applications such as PeopleSoft, JD Edwards, Siebel, and E-Business Suite. As part of Oracle's Enterprise Application Documents strategy, you can see an example of these integrations in this webinar: Managing Customer Documents and Marketing Assets in Siebel. You can also get a white paper of the ROI Embry Riddle achieved using Oracle Content Management integrated with enterprise applications. Embry Riddle moved from a point solution for content management on accounts payable to an infrastructure investment - they are now using Oracle Content Management for accounts payable with Oracle E-Business Suite, and for student on-boarding with PeopleSoft e-Campus. They continue to expand their use of Oracle Content Management to address further use cases from a core infrastructure. Oracle also shows its vision in the ability to deliver content optimized for online channels. 
Marketers can use Oracle ECM Suite to deliver digital assets and offers as part of an integrated campaign that understands website visitors and ensures that they are given the most pertinent information and offers. Oracle also provides full lifecycle management through its built-in records management. Companies are able to manage the lifecycle of content (both records and non-records) through built-in retention management. And with the integration of Oracle ECM Suite and Sun Storage Archive Manager, content can be routed to the appropriate storage media based upon content type, usage data or other business rules. This ensures that the most accessed content is instantly available, and archived content is stored on a more appropriate medium like tape. You can learn more in this webinar - Oracle Content Management and Sun Tiered Storage. If you are interested in reading more about why Oracle was chosen as a Leader, view the Gartner Magic Quadrant for Enterprise Content Management.

    Read the article
