Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • Problem with URL encoded parameters in URL view helper

    - by Richard Knop
    So my problem is kinda weird, it only occurs when I test the application offline (on my PC with WampServer), the same code works 100% correctly online. Here is how I use the helper (just example): <a href="<?php echo $this->url(array('module' => 'admin', 'controller' => 'index', 'action' => 'approve-photo', 'id' => $this->escape($a->id), 'ret' => urlencode('admin/index/photos')), null, true); ?>" class="blue">approve</a> Online, this link works great, it goes to the action which looks similar to this: public function approvePhotoAction() { $request = $this->getRequest(); $photos = $this->_getTable('Photos'); $dbTrans = false; try { $db = $this->_getDb(); $dbTrans = $db->beginTransaction(); $photos->edit($request->getParam('id'), array('status' => 'approved')); $db->commit(); } catch (Exception $e) { if (true === $dbTrans) { $db->rollBack(); } } $this->_redirect(urldecode($request->getParam('ret'))); } So online, it approves the photo and redirects back to the URL that is encoded as "ret" param in the URL ("admin/index/photos"). But offline with WampServer I click on the link and get 404 error like this: Not Found The requested URL /public/admin/index/approve-photo/id/1/ret/admin/index/photos was not found on this server. What the hell? I'm using the latest version of WampServer (WampServer 2.0i [11/07/09]). Everything else works. Here is my .htaccess file, just in case: RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ /index.php [NC,L] # Turn off magic quotes #php_flag magic_quotes_gpc off I'm using virtual hosts to test ZF projects on my local PC. Here is how I add virtual hosts. httpd.conf: NameVirtualHost *:80 <VirtualHost *:80> ServerName myproject DocumentRoot "C:\wamp\www\myproject" </VirtualHost> the hosts file: 127.0.0.1 myproject Any help would be appreciated because this makes testing and debugging projects on my localhost a nightmare and almost impossible task. I have to upload everything online to check if it works :( UPDATE: Source code of the _redirect helper (built in ZF helper): /** * Set redirect in response object * * @return void */ protected function _redirect($url) { if ($this->getUseAbsoluteUri() && !preg_match('#^(https?|ftp)://#', $url)) { $host = (isset($_SERVER['HTTP_HOST'])?$_SERVER['HTTP_HOST']:''); $proto = (isset($_SERVER['HTTPS'])&&$_SERVER['HTTPS']!=="off") ? 'https' : 'http'; $port = (isset($_SERVER['SERVER_PORT'])?$_SERVER['SERVER_PORT']:80); $uri = $proto . '://' . $host; if ((('http' == $proto) && (80 != $port)) || (('https' == $proto) && (443 != $port))) { $uri .= ':' . $port; } $url = $uri . '/' . ltrim($url, '/'); } $this->_redirectUrl = $url; $this->getResponse()->setRedirect($url, $this->getCode()); } UPDATE 2: Output of the helper offline: /admin/index/approve-photo/id/1/ret/admin%252Findex%252Fphotos And online (the same): /admin/index/approve-photo/id/1/ret/admin%252Findex%252Fphotos UPDATE 3: OK. The problem was actually with the virtual host configuration. The document root was set to C:\wamp\www\myproject instead of C:\wamp\www\myproject\public. 
And I was using this htaccess to redirect to the public folder: RewriteEngine On php_value upload_max_filesize 15M php_value post_max_size 15M php_value max_execution_time 200 php_value max_input_time 200 # Exclude some directories from URI rewriting #RewriteRule ^(dir1|dir2|dir3) - [L] RewriteRule ^\.htaccess$ - [F] RewriteCond %{REQUEST_URI} ="" RewriteRule ^.*$ /public/index.php [NC,L] RewriteCond %{REQUEST_URI} !^/public/.*$ RewriteRule ^(.*)$ /public/$1 RewriteCond %{REQUEST_FILENAME} -f RewriteRule ^.*$ - [NC,L] RewriteRule ^public/.*$ /public/index.php [NC,L] Damn it I don't know why I forgot about this, I thought the virtual host was configured correctly to the public folder, I was 100% sure about that, I also double checked it and saw no problem (wtf? am I blind?).
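As a reference for anyone hitting the same 404, a virtual host along the lines described in UPDATE 3 would point DocumentRoot straight at the project's public folder. The paths below are the ones from the post; the Directory block is an assumption based on a typical WampServer/Apache 2.2 setup, so treat this as a sketch rather than a verified config:

```apacheconf
# Sketch of a corrected vhost: DocumentRoot targets the public folder directly,
# so the extra "redirect everything into /public" .htaccess rules are no longer needed.
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName myproject
    DocumentRoot "C:\wamp\www\myproject\public"

    # Assumed directory block so the .htaccess rewrite rules in public/ are honored
    <Directory "C:\wamp\www\myproject\public">
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```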

    Read the article

  • Fluent NHibernate - subclasses with shared reference

    - by ollie
    Edit: changed class names. I'm using Fluent NHibernate (v 1.0.0.614) automapping on the following set of classes (where Entity is the base class provided in the S#arp Architecture framework): public class Car : Entity { public virtual int ModelYear { get; set; } public virtual Company Manufacturer { get; set; } } public class Sedan : Car { public virtual bool WonSedanOfYear { get; set; } } public class Company : Entity { public virtual IList<Sedan> Sedans { get; set; } } This results in the following Configuration (as written to hbm.xml): <class name="Company" table="Companies"> <id name="Id" type="System.Int32" unsaved-value="0"> <column name="`ID`" /> <generator class="identity" /> </id> <bag cascade="all" inverse="true" name="Sedans" mutable="true"> <key> <column name="`CompanyID`" /> </key> <one-to-many class="Sedan" /> </bag> </class> <class name="Car" table="Cars"> <id name="Id" type="System.Int32" unsaved-value="0"> <column name="`ID`" /> <generator class="identity" /> </id> <property name="ModelYear" type="System.Int32"> <column name="`ModelYear`" /> </property> <many-to-one cascade="save-update" class="Company" name="Manufacturer"> <column name="`CompanyID`" /> </many-to-one> <joined-subclass name="Sedan"> <key> <column name="`CarID`" /> </key> <property name="WonSedanOfYear" type="System.Boolean"> <column name="`WonSedanOfYear`" /> </property> </joined-subclass> </class> So far so good! But now comes the ugly part. The generated database tables: Table: Companies Columns: ID (PK, int, not null) Table: Cars Columns: ID (PK, int, not null) ModelYear (int, null) CompanyID (FK, int, null) Table: Sedan Columns: CarID (PK, FK, int, not null) WonSedanOfYear (bit, null) CompanyID (FK, int, null) Instead of one FK for Company, I get two! How can I ensure I only get one FK for Company? Override the automapping? Put a convention in place? Or is this a bug? Your thoughts are appreciated.
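On the "override the automapping" option raised at the end of the question, the override mechanism itself looks roughly like the sketch below. The class and column names follow the post, but the exact fluent method names (KeyColumn, Column) vary between Fluent NHibernate releases, and whether forcing both sides onto the single CompanyID column is enough once the joined subclass is involved is an open assumption:

```csharp
// Hedged sketch of automapping overrides, not a verified fix: the intent is to
// make the Sedans collection and the Manufacturer reference share one CompanyID
// column instead of letting automapping generate a second foreign key.
public class CompanyMappingOverride : IAutoMappingOverride<Company>
{
    public void Override(AutoMapping<Company> mapping)
    {
        mapping.HasMany(x => x.Sedans)
               .KeyColumn("CompanyID")   // method name is an assumption for this FNH version
               .Inverse();
    }
}

public class CarMappingOverride : IAutoMappingOverride<Car>
{
    public void Override(AutoMapping<Car> mapping)
    {
        mapping.References(x => x.Manufacturer)
               .Column("CompanyID");     // likewise an assumption
    }
}
```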

    Read the article

  • How do I solve an unresolved external when using C++ Builder packages?

    - by David M
    I'm experimenting with reconfiguring my application to make heaving use of packages. Both I and another developer running a similar experiment are running into a bit of trouble when linking using several different packages. We're probably both doing something wrong, but goodness knows what :) The situation is this: The first package, PackageA.bpl, contains C++ class FooA. The class is declared with the PACKAGE directive. The second package, PackageB.bpl, contains a class inheriting from FooA, called FooB. It includes FooB.h, and the package is built using runtime packages, and links to PackageA by adding a reference to PackageA.bpi. When building PackageB, it compiles fine but linking fails with a number of unresolved externals, the first few of which are: [ILINK32 Error] Error: Unresolved external '__tpdsc__ FooA' referenced from C:\blah\FooB.OBJ [ILINK32 Error] Error: Unresolved external 'FooA::' referenced from C:\blah\FooB.OBJ [ILINK32 Error] Error: Unresolved external '__fastcall FooA::~FooA()' referenced from blah\FooB.OBJ etc. Running TDump on PackageA.bpl shows: Exports from PackageA.bpl 14 exported name(s), 14 export addresse(s). Ordinal base is 1. Sorted by Name: RVA Ord. Hint Name -------- ---- ---- ---- 00002A0C 8 0000 __tpdsc__ FooA 00002AD8 10 0001 __linkproc__ FooA::Finalize 00002AC8 9 0002 __linkproc__ FooA::Initialize 00002E4C 12 0003 __linkproc__ PackageA::Finalize 00002E3C 11 0004 __linkproc__ PackageA::Initialize 00006510 14 0007 FooA:: 00002860 5 0008 FooA::FooA(FooA&) 000027E4 4 0009 FooA::FooA() 00002770 3 000A __fastcall FooA::~FooA() 000028DC 6 000B __fastcall FooA::Method1() const 000028F4 7 000C __fastcall FooA::Method2() const 00001375 2 000D Finalize 00001368 1 000E Initialize 0000610C 13 000F ___CPPdebugHook So the class definitely seems to be exported and available to link. I can see entries for the specific things ILink32 says it's looking for and not finding. Running TDump on the BPI file shows similar entries. Other info The class does descend from TObject, though originally before refactoring into packages it was a normal C++ class. (More detail below. It seems "safer" using VCL-style classes when trying to solve problems with a very Delphi-ish thing like this anyway. Changing this only changes the order of unresolved externals to first not find Method1 and Method2, then others.) Declaration for FooA: class PACKAGE FooA: public TObject { public: FooA(); virtual __fastcall ~FooA(); FooA(const FooA&); virtual __fastcall long Method1() const; virtual __fastcall long Method2() const; }; and FooB: class FooB: public FooA { public: FooB(); virtual __fastcall ~FooB(); ... other methods... }; All methods definitely are implemented in the .cpp files, so it's not not finding them because they don't exist! The .cpp files also contain #pragma package(smart_init) near the top, under the includes. Questions that might help... Are packages reliable using C++, or are they only useable with Delphi code? Is linking to the first package by adding a reference to its BPI correct - is that how you're supposed to do it? I could use a LIB but it seems to make the second package much larger, and I suspect it's statically linking in the contents of the first. Can we use the PACKAGE directive only on TObject-derived classes? There is no compiler warning using it on standard C++ classes. Is splitting code into packages the best way to achieve the goal of isolating code and communicating through defined layers / interfaces? 
I've been investigating this path because it seems to be the C++Builder / Delphi Way, and if it worked it looks attractive. But are there better alternatives? I'm very new to using packages and have only known about them through using components before. Any general words of advice would be great! We're using C++Builder 2010. I've fabricated the class and method names in the above code examples, but other than that the details are exactly what we're seeing. Cheers, David

    Read the article

  • C#: Inheritance, Overriding, and Hiding

    - by Rosarch
    I'm having difficulty with an architectural decision for my C# XNA game. The basic entity in the world, such as a tree, zombie, or the player, is represented as a GameObject. Each GameObject is composed of at least a GameObjectController, GameObjectModel, and GameObjectView. These three are enough for simple entities, like inanimate trees or rocks. However, as I try to keep the functionality as factored out as possible, the inheritance begins to feel unwieldy. Syntactically, I'm not even sure how best to accomplish my goals. Here is the GameObjectController: public class GameObjectController { protected GameObjectModel model; protected GameObjectView view; public GameObjectController(GameObjectManager gameObjectManager) { this.gameObjectManager = gameObjectManager; model = new GameObjectModel(this); view = new GameObjectView(this); } public GameObjectManager GameObjectManager { get { return gameObjectManager; } } public virtual GameObjectView View { get { return view; } } public virtual GameObjectModel Model { get { return model; } } public virtual void Update(long tick) { } } I want to specify that each subclass of GameObjectController will have accessible at least a GameObjectView and GameObjectModel. If subclasses are fine using those classes, but perhaps are overriding for a more sophisticated Update() method, I don't want them to have to duplicate the code to produce those dependencies. So, the GameObjectController constructor sets those objects up. However, some objects do want to override the model and view. This is where the trouble comes in. Some objects need to fight, so they are CombatantGameObjects: public class CombatantGameObject : GameObjectController { protected new readonly CombatantGameModel model; public new virtual CombatantGameModel Model { get { return model; } } protected readonly CombatEngine combatEngine; public CombatantGameObject(GameObjectManager gameObjectManager, CombatEngine combatEngine) : base(gameObjectManager) { model = new CombatantGameModel(this); this.combatEngine = combatEngine; } public override void Update(long tick) { if (model.Health <= 0) { gameObjectManager.RemoveFromWorld(this); } base.Update(tick); } } Still pretty simple. Is my use of new to hide instance variables correct? Note that I'm assigning CombatantObjectController.model here, even though GameObjectController.Model was already set. And, combatants don't need any special view functionality, so they leave GameObjectController.View alone. Then I get down to the PlayerController, at which a bug is found. public class PlayerController : CombatantGameObject { private readonly IInputReader inputReader; private new readonly PlayerModel model; public new PlayerModel Model { get { return model; } } private float lastInventoryIndexAt; private float lastThrowAt; public PlayerController(GameObjectManager gameObjectManager, IInputReader inputReader, CombatEngine combatEngine) : base(gameObjectManager, combatEngine) { this.inputReader = inputReader; model = new PlayerModel(this); Model.Health = Constants.PLAYER_HEALTH; } public override void Update(long tick) { if (Model.Health <= 0) { gameObjectManager.RemoveFromWorld(this); for (int i = 0; i < 10; i++) { Debug.WriteLine("YOU DEAD SON!!!"); } return; } UpdateFromInput(tick); // .... } } The first time that this line is executed, I get a null reference exception: model.Body.ApplyImpulse(movementImpulse, model.Position); model.Position looks at model.Body, which is null. 
This is a function that initializes GameObjects before they are deployed into the world: public void Initialize(GameObjectController controller, IDictionary<string, string> data, WorldState worldState) { controller.View.read(data); controller.View.createSpriteAnimations(data, _assets); controller.Model.read(data); SetUpPhysics(controller, worldState, controller.Model.BoundingCircleRadius, Single.Parse(data["x"]), Single.Parse(data["y"]), bool.Parse(data["isBullet"])); } Every object is passed as a GameObjectController. Does that mean that if the object is really a PlayerController, controller.Model will refer to the base's GameObjectModel and not the PlayerController's overridden PlayerObjectModel?
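The short answer is yes, and the following minimal, self-contained snippet (not taken from the game code above) shows why: a member hidden with new is resolved from the compile-time type of the reference, while a virtual/override member is resolved from the runtime type.

```csharp
using System;

class Base
{
    public object Model { get { return "base model"; } }          // hidden in the subclass with 'new'
    public virtual object View { get { return "base view"; } }    // overridden in the subclass
}

class Derived : Base
{
    public new object Model { get { return "derived model"; } }
    public override object View { get { return "derived view"; } }
}

static class HidingDemo
{
    static void Main()
    {
        Base b = new Derived();
        Console.WriteLine(b.Model);   // prints "base model"   -> hiding follows the static type
        Console.WriteLine(b.View);    // prints "derived view" -> overriding follows the runtime type
    }
}
```

Because PlayerController.Model hides rather than overrides the base property, Initialize, which receives the object as a GameObjectController, reads the base GameObjectModel field, not the PlayerModel assigned in the PlayerController constructor; that is the likely reason model.Body is still null when the physics code runs.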

    Read the article

  • Qt Linker Errors

    - by Kyle Rozendo
    Hi All, I've been trying to get Qt working (QCreator, QIde and now VS2008). I have sorted out a ton of issues already, but I am now faced with the following build errors, and frankly I'm out of ideas. Error 1 error LNK2019: unresolved external symbol "public: void __thiscall FileVisitor::processFileList(class QStringList)" (?processFileList@FileVisitor@@QAEXVQStringList@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 2 error LNK2019: unresolved external symbol "public: void __thiscall FileVisitor::processEntry(class QString)" (?processEntry@FileVisitor@@QAEXVQString@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 3 error LNK2019: unresolved external symbol "public: class QString __thiscall ArgumentList::getSwitchArg(class QString,class QString)" (?getSwitchArg@ArgumentList@@QAE?AVQString@@V2@0@Z) referenced in function _main codevisitor-test.obj Question1 Error 4 error LNK2019: unresolved external symbol "public: bool __thiscall ArgumentList::getSwitch(class QString)" (?getSwitch@ArgumentList@@QAE_NVQString@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 5 error LNK2019: unresolved external symbol "public: void __thiscall ArgumentList::argsToStringlist(int,char * * const)" (?argsToStringlist@ArgumentList@@QAEXHQAPAD@Z) referenced in function "public: __thiscall ArgumentList::ArgumentList(int,char * * const)" (??0ArgumentList@@QAE@HQAPAD@Z) codevisitor-test.obj Question1 Error 6 error LNK2019: unresolved external symbol "public: __thiscall FileVisitor::FileVisitor(class QString,bool,bool)" (??0FileVisitor@@QAE@VQString@@_N1@Z) referenced in function "public: __thiscall CodeVisitor::CodeVisitor(class QString,bool)" (??0CodeVisitor@@QAE@VQString@@_N@Z) codevisitor-test.obj Question1 Error 7 error LNK2001: unresolved external symbol "public: virtual struct QMetaObject const * __thiscall FileVisitor::metaObject(void)const " (?metaObject@FileVisitor@@UBEPBUQMetaObject@@XZ) codevisitor-test.obj Question1 Error 8 error LNK2001: unresolved external symbol "public: virtual void * __thiscall FileVisitor::qt_metacast(char const *)" (?qt_metacast@FileVisitor@@UAEPAXPBD@Z) codevisitor-test.obj Question1 Error 9 error LNK2001: unresolved external symbol "public: virtual int __thiscall FileVisitor::qt_metacall(enum QMetaObject::Call,int,void * *)" (?qt_metacall@FileVisitor@@UAEHW4Call@QMetaObject@@HPAPAX@Z) codevisitor-test.obj Question1 Error 10 error LNK2001: unresolved external symbol "protected: virtual bool __thiscall FileVisitor::skipDir(class QDir const &)" (?skipDir@FileVisitor@@MAE_NABVQDir@@@Z) codevisitor-test.obj Question1 Error 11 fatal error LNK1120: 10 unresolved externals ... \Visual Studio 2008\Projects\Assignment1\Question1\Question1\Debug\Question1.exe Question1 The code is as follows: #include "argumentlist.h" #include <codevisitor.h> #include <QDebug> void usage(QString appname) { qDebug() << appname << " Usage: \n" << "codevisitor [-r] [-d startdir] [-f filter] [file-list]\n" << "\t-r \tvisitor will recurse into subdirs\n" << "\t-d startdir\tspecifies starting directory\n" << "\t-f filter\tfilename filter to restrict visits\n" << "\toptional list of files to be visited"; } int main(int argc, char** argv) { ArgumentList al(argc, argv); QString appname = al.takeFirst(); /* app name is always first in the list. 
*/ if (al.count() == 0) { usage(appname); exit(1); } bool recursive(al.getSwitch("-r")); QString startdir(al.getSwitchArg("-d")); QString filter(al.getSwitchArg("-f")); CodeVisitor cvis(filter, recursive); if (startdir != QString()) { cvis.processEntry(startdir); } else if (al.size()) { cvis.processFileList(al); } else return 1; qDebug() << "Files Processed: %d" << cvis.getNumFiles(); qDebug() << cvis.getResultString(); return 0; } Thanks in advance, I'm simply stumped.

    Read the article

  • EF Code first + Delete Child Object from Parent?

    - by ebb
    I have a one-to-many relationship between my table Case and my other table CaseReplies. I'm using EF Code First and now wants to delete a CaseReply from a Case object, however it seems impossible to do such thing because it just tries to remove the CaseId from the specific CaseReply record and not the record itself.. short: Case just removes the relationship between itself and the CaseReply.. it does not delete the CaseReply. My code: // Case.cs (Case Object) public class Case { [Key] public int Id { get; set; } public string Topic { get; set; } public string Message { get; set; } public DateTime Date { get; set; } public Guid UserId { get; set; } public virtual User User { get; set; } public virtual ICollection<CaseReply> Replies { get; set; } } // CaseReply.cs (CaseReply Object) public class CaseReply { [Key] public int Id { get; set; } public string Message { get; set; } public DateTime Date { get; set; } public int CaseId { get; set; } public Guid UserId { get; set; } public virtual User User { get; set; } public virtual Case Case { get; set; } } // RepositoryBase.cs public class RepositoryBase<T> : IRepository<T> where T : class { public IDbContext Context { get; private set; } public IDbSet<T> ObjectSet { get; private set; } public RepositoryBase(IDbContext context) { Contract.Requires(context != null); Context = context; if (context != null) { ObjectSet = Context.CreateDbSet<T>(); if (ObjectSet == null) { throw new InvalidOperationException(); } } } public IRepository<T> Remove(T entity) { ObjectSet.Remove(entity); return this; } public IRepository<T> SaveChanges() { Context.SaveChanges(); return this; } } // CaseRepository.cs public class CaseRepository : RepositoryBase<Case>, ICaseRepository { public CaseRepository(IDbContext context) : base(context) { Contract.Requires(context != null); } public bool RemoveCaseReplyFromCase(int caseId, int caseReplyId) { Case caseToRemoveReplyFrom = ObjectSet.Include(x => x.Replies).FirstOrDefault(x => x.Id == caseId); var delete = caseToRemoveReplyFrom.Replies.FirstOrDefault(x => x.Id == caseReplyId); caseToRemoveReplyFrom.Replies.Remove(delete); return Context.SaveChanges() >= 1; } } Thanks in advance.
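One common way out, sketched below against the repository types shown above, is to delete the CaseReply entity itself through the context rather than only removing it from the Case.Replies collection. The method name and the use of CreateDbSet<CaseReply>() are assumptions based on the IDbContext interface implied by RepositoryBase:

```csharp
// Hedged sketch, not the original code: removing the entity from its own set
// marks the row for deletion, whereas removing it from Case.Replies only
// breaks the association and nulls out CaseId.
// Assumes using System.Linq and System.Data.Entity.
public bool RemoveCaseReply(int caseReplyId)
{
    IDbSet<CaseReply> replies = Context.CreateDbSet<CaseReply>();

    CaseReply reply = replies.FirstOrDefault(x => x.Id == caseReplyId);
    if (reply == null)
    {
        return false;
    }

    replies.Remove(reply);              // deletes the row on SaveChanges
    return Context.SaveChanges() >= 1;
}
```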

    Read the article

  • HQL over ternary map with subcollection

    - by Diego Mijelshon
    I'm stuck with a query I need to write. Given the following model: public class A : Entity<Guid> { public virtual IDictionary<B, C> C { get; set; } } public class B : Entity<Guid> { } public class C : Entity<Guid> { public virtual int Data1 { get; set; } public virtual ICollection<D> D { get; set; } } public class D : Entity<Guid> { public virtual int Data2 { get; set; } } I need to get a list of A instances that have a D containing some data for the specified B (parameter) In the object model, that would be: listOfA.Where(a => a.C[b].D.Any(d => d.Data2 == 0)) But I wasn't able to write a working HQL. I'm able to write something like the following, which filters on C.Data1: from A a where a.C[:b].Data1 = 0 But I'm unable to do anything with the elements of a.C[:b].D (I get various parsing exceptions) Here are the mappings, in case you're interested (nothing special, generated by ConfORM): <class name="A"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> <map name="C"> <key column="a_key" /> <map-key-many-to-many class="B" /> <one-to-many class="C" /> </map> </class> <class name="B"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> </class> <class name="C"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> <property name="Data1" /> <bag name="D"> <key column="c_key" /> <one-to-many class="D" /> </bag> </class> <class name="D"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> <property name="Data2" /> </class> Thanks!

    Read the article

  • ListBoxFor not populating with selected items

    - by user576838
    I've see this question asked a couple of other times, and I followed this after I tried things on my own with the MS music store demo to no avail, but I still can't get this to work. I've also noticed when I look at my MultiSelectList object in the viewmodel, it has the correct items in the selected items property, but if I expand the results view, it doesn't have any listboxitem with the selected value. What am I missing here? I feel like I'm taking crazy pills! Thanks in advance. model: public class Article { public int ArticleID { get; set; } public DateTime? DatePurchased { get; set; } public DateTime? LastWorn { get; set; } public string ThumbnailImg { get; set; } public string LargeImg { get; set; } public virtual List<Outfit> Outfits { get; set; } public virtual List<Tag> Tags { get; set; } } viewmodel: public class ArticleViewModel { public int ArticleID { get; set; } public List<Tag> Tags { get; set; } public MultiSelectList mslTags { get; set; } public virtual Article Article { get; set; } public ArticleViewModel(int ArticleID) { using (ctContext db = new ctContext()) { this.Article = db.Articles.Find(ArticleID); this.Tags = db.Tags.ToList(); this.mslTags = new MultiSelectList(this.Tags, "TagID", "Name", this.Article.Tags); } } } controller: public ActionResult Index() { ArticleIndexViewModel vm = new ArticleIndexViewModel(db); return View(vm); } view: @model ClosetTracker.ArticleViewModel @using (Html.BeginForm()) { <img id="bigImg" src="@Model.Article.ThumbnailImg" alt="img" /> @Html.HiddenFor(m => m.ArticleID); @Html.LabelFor(m => m.Article.Tags) @* @Html.ListBoxFor(m => m.Article.Tags, Model.Tags.Select(t => new SelectListItem { Text = t.Name, Value = t.TagID.ToString() }), new { Multiple = "multiple" }) *@ @Html.ListBoxFor(m => m.Article.Tags, Model.mslTags); @Html.LabelFor(m => m.Article.LastWorn) @Html.TextBoxFor(m => m.Article.LastWorn, new { @class = "datepicker" }) @Html.LabelFor(m => m.Article.DatePurchased) @Html.TextBoxFor(m => m.Article.DatePurchased, new { @class = "datepicker" }) <p> <input type="submit" value="Save" /> </p> } EDITED Ok, I changed around the constructor of the MultiSelectList to have a list of TagID in the selected value arg instead of a list of Tag objects. This shows the correct tags as selected in the results view when I watch the mslTags object in debug mode. However, it still isn't rendering correctly to the page. public class ArticleViewModel { public int ArticleID { get; set; } public List<Tag> Tags { get; set; } public MultiSelectList mslTags { get; set; } public virtual Article Article { get; set; } public ArticleViewModel(int ArticleID) { using (ctContext db = new ctContext()) { this.Article = db.Articles.Find(ArticleID); this.Tags = db.Tags.ToList(); this.mslTags = new MultiSelectList(this.Tags, "TagID", "Name", this.Article.Tags.Select(t => t.TagID).ToList()); } } }
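A hedged sketch of one common fix: ListBoxFor takes its selections from the bound model property, not from the MultiSelectList, so the view model needs a simple collection of the selected TagIDs to bind against. The SelectedTagIds property and the view line are assumptions, not part of the original code:

```csharp
// Sketch only: expose the selected IDs as plain values so the helper can match
// them against the option values. Assumes using System.Linq and System.Web.Mvc.
public class ArticleViewModel
{
    public int ArticleID { get; set; }
    public List<Tag> Tags { get; set; }
    public MultiSelectList mslTags { get; set; }
    public int[] SelectedTagIds { get; set; }   // hypothetical property ListBoxFor binds to
    public virtual Article Article { get; set; }

    public ArticleViewModel(int ArticleID)
    {
        using (ctContext db = new ctContext())
        {
            this.Article = db.Articles.Find(ArticleID);
            this.Tags = db.Tags.ToList();
            this.SelectedTagIds = this.Article.Tags.Select(t => t.TagID).ToArray();
            this.mslTags = new MultiSelectList(this.Tags, "TagID", "Name");
        }
    }
}

// In the view the helper is then bound to the ID collection:
//   @Html.ListBoxFor(m => m.SelectedTagIds, Model.mslTags)
```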

    Read the article

  • Many to many self join through junction table

    - by Peter
    I have an EF model that can self-reference through an intermediary class to define a parent/child relationship. I know how to do a pure many-to-many relationship using the Map command, but for some reason going through this intermediary class is causing problems with my mappings. The intermediary class provides additional properties for the relationship. See the classes, modelBinder logic and error below: public class Equipment { [Key] public int EquipmentId { get; set; } public virtual List<ChildRecord> Parents { get; set; } public virtual List<ChildRecord> Children { get; set; } } public class ChildRecord { [Key] public int ChildId { get; set; } [Required] public int Quantity { get; set; } [Required] public Equipment Parent { get; set; } [Required] public Equipment Child { get; set; } } I've tried building the mappings in both directions, though I only keep one set in at a time: modelBuilder.Entity<ChildRecord>() .HasRequired(x => x.Parent) .WithMany(x => x.Children ) .WillCascadeOnDelete(false); modelBuilder.Entity<ChildRecord>() .HasRequired(x => x.Child) .WithMany(x => x.Parents) .WillCascadeOnDelete(false); OR modelBuilder.Entity<Equipment>() .HasMany(x => x.Parents) .WithRequired(x => x.Child) .WillCascadeOnDelete(false); modelBuilder.Entity<Equipment>() .HasMany(x => x.Children) .WithRequired(x => x.Parent) .WillCascadeOnDelete(false); Regardless of which set I use, I get the error: The foreign key component 'Child' is not a declared property on type 'ChildRecord'. Verify that it has not been explicitly excluded from the model and that it is a valid primitive property. when I try do deploy my ef model to the database. If I build it without the modelBinder logic in place then I get two ID columns for Child and two ID columns for Parent in my ChildRecord table. This makes sense since it tries to auto create the navigation properties from Equipment and doesn't know that there are already properties in ChildRecord to fulfill this need. I tried using Data Annotations on the class, and no modelBuilder code, this failed with the same error as above: [Required] [ForeignKey("EquipmentId")] public Equipment Parent { get; set; } [Required] [ForeignKey("EquipmentId")] public Equipment Child { get; set; } AND [InverseProperty("Child")] public virtual List<ChildRecord> Parents { get; set; } [InverseProperty("Parent")] public virtual List<ChildRecord> Children { get; set; } I've looked at various other answers around the internet/SO, and the common difference seems to be that I am self joining where as all the answers I can find are for two different types. Entity Framework Code First Many to Many Setup For Existing Tables Many to many relationship with junction table in Entity Framework? Creating many to many junction table in Entity Framework
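For what it's worth, a sketch of the usual pattern for this kind of self-referencing junction is below: declare explicit scalar foreign-key properties on ChildRecord and point each relationship at its own column with HasForeignKey. The ParentId/ChildEquipmentId names and the context class are assumptions; the error message suggests ForeignKey/HasForeignKey must reference a scalar property that actually exists on ChildRecord, which the Parent and Child navigations do not satisfy.

```csharp
// Hedged sketch, not the original model: explicit FK scalars plus fluent
// configuration so EF maps exactly one Parent column and one Child column.
public class ChildRecord
{
    [Key]
    public int ChildId { get; set; }

    [Required]
    public int Quantity { get; set; }

    public int ParentId { get; set; }            // hypothetical FK column names
    public int ChildEquipmentId { get; set; }

    public virtual Equipment Parent { get; set; }
    public virtual Equipment Child { get; set; }
}

public class EquipmentContext : DbContext        // assumed context name
{
    public DbSet<Equipment> Equipment { get; set; }
    public DbSet<ChildRecord> ChildRecords { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<ChildRecord>()
            .HasRequired(x => x.Parent)
            .WithMany(x => x.Children)
            .HasForeignKey(x => x.ParentId)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<ChildRecord>()
            .HasRequired(x => x.Child)
            .WithMany(x => x.Parents)
            .HasForeignKey(x => x.ChildEquipmentId)
            .WillCascadeOnDelete(false);
    }
}
```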

    Read the article

  • Can anyone explain why my crypto++ decrypted file is 16 bytes short?

    - by Tom Williams
    I suspect it might be too much to hope for, but can anyone with experience with crypto++ explain why the "decrypted.out" file created by main() is 16 characters short (which probably not coincidentally is the block size)? I think the issue must be in CryptStreamBuffer::GetNextChar(), but I've been staring at it and the crypto++ documentation for hours. Any other comments about how crummy or naive my std::streambuf implementation are also welcome ;-) And I've just noticed I'm missing some calls to delete so you don't have to tell me about those. Thanks, Tom // Runtime Includes #include <iostream> // Crypto++ Includes #include "aes.h" #include "modes.h" // xxx_Mode< > #include "filters.h" // StringSource and // StreamTransformation #include "files.h" using namespace std; class CryptStreamBuffer: public std::streambuf { public: CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c); CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c); protected: virtual int_type overflow(int_type ch = traits_type::eof()); virtual int_type uflow(); virtual int_type underflow(); virtual int_type pbackfail(int_type ch); virtual int sync(); private: int GetNextChar(); int m_NextChar; // Buffered character CryptoPP::StreamTransformationFilter* m_StreamTransformationFilter; CryptoPP::FileSource* m_Source; CryptoPP::FileSink* m_Sink; }; // class CryptStreamBuffer CryptStreamBuffer::CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c) : m_NextChar(traits_type::eof()), m_StreamTransformationFilter(0), m_Source(0), m_Sink(0) { m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c); m_Source = new CryptoPP::FileSource(encryptedInput, false, m_StreamTransformationFilter); } CryptStreamBuffer::CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c) : m_NextChar(traits_type::eof()), m_StreamTransformationFilter(0), m_Source(0), m_Sink(0) { m_Sink = new CryptoPP::FileSink(encryptedOutput); m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c, m_Sink); } CryptStreamBuffer::int_type CryptStreamBuffer::overflow(int_type ch) { return m_StreamTransformationFilter->Put((byte)ch); } CryptStreamBuffer::int_type CryptStreamBuffer::uflow() { int_type result = GetNextChar(); // Reset the buffered character m_NextChar = traits_type::eof(); return result; } CryptStreamBuffer::int_type CryptStreamBuffer::underflow() { return GetNextChar(); } CryptStreamBuffer::int_type CryptStreamBuffer::pbackfail(int_type ch) { return traits_type::eof(); } int CryptStreamBuffer::sync() { if (m_Sink) { m_StreamTransformationFilter->MessageEnd(); } } int CryptStreamBuffer::GetNextChar() { // If we have a buffered character do nothing if (m_NextChar != traits_type::eof()) { return m_NextChar; } // If there are no more bytes currently available then pump the source // *** I SUSPECT THE PROBLEM IS HERE *** if (m_StreamTransformationFilter->MaxRetrievable() == 0) { m_Source->Pump(1024); } // Retrieve the next byte byte nextByte; size_t noBytes = m_StreamTransformationFilter->Get(nextByte); if (0 == noBytes) { return traits_type::eof(); } // Buffer up the next character m_NextChar = nextByte; return m_NextChar; } void InitKey(byte key[]) { key[0] = -62; key[1] = 102; key[2] = 78; key[3] = 75; key[4] = -96; key[5] = 125; key[6] = 66; key[7] = 125; key[8] = -95; key[9] = -66; key[10] = 114; key[11] = 22; key[12] = 48; key[13] = 111; key[14] = -51; key[15] = 112; } void DecryptFile(const char* sourceFileName, const char* 
destFileName) { ifstream ifs(sourceFileName, ios::in | ios::binary); ofstream ofs(destFileName, ios::out | ios::binary); byte key[CryptoPP::AES::DEFAULT_KEYLENGTH]; InitKey(key); CryptoPP::ECB_Mode<CryptoPP::AES>::Decryption decryptor(key, sizeof(key)); if (ifs) { if (ofs) { CryptStreamBuffer cryptBuf(ifs, decryptor); std::istream decrypt(&cryptBuf); int c; while (EOF != (c = decrypt.get())) { ofs << (char)c; } ofs.flush(); } else { std::cerr << "Failed to open file '" << destFileName << "'." << endl; } } else { std::cerr << "Failed to open file '" << sourceFileName << "'." << endl; } } void EncryptFile(const char* sourceFileName, const char* destFileName) { ifstream ifs(sourceFileName, ios::in | ios::binary); ofstream ofs(destFileName, ios::out | ios::binary); byte key[CryptoPP::AES::DEFAULT_KEYLENGTH]; InitKey(key); CryptoPP::ECB_Mode<CryptoPP::AES>::Encryption encryptor(key, sizeof(key)); if (ifs) { if (ofs) { CryptStreamBuffer cryptBuf(ofs, encryptor); std::ostream encrypt(&cryptBuf); int c; while (EOF != (c = ifs.get())) { encrypt << (char)c; } encrypt.flush(); } else { std::cerr << "Failed to open file '" << destFileName << "'." << endl; } } else { std::cerr << "Failed to open file '" << sourceFileName << "'." << endl; } } int main(int argc, char* argv[]) { EncryptFile(argv[1], "encrypted.out"); DecryptFile("encrypted.out", "decrypted.out"); return 0; }

    Read the article


  • UIImagePickerController, UIImage, Memory and More!

    - by Itay
    I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController and then displaying it in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own. I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you! How Do I Select An Image From the User's Images or From the Camera? You use UIImagePickerController. The documentation for the class gives a decent overview of how one would use it, and can be found here. Basically, you create an instance of the class, which is a modal view controller, display it, and set yourself (or some class) to be the delegate. Then you'll get notified when a user selects some form of media (movie or image in 3.0 on the 3GS), and you can do whatever you want. My Delegate Was Called - How Do I Get The Media? The delegate method signature is the following: - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info; You should put a breakpoint in the debugger to see what's in the dictionary, but you use that to extract the media. For example: UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage]; There are other keys that work as well, all in the documentation. OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What gives? Unfortunately, Apple decided that we're not worthy of this information. When they load the data into the UIImage, they strip it of all the EXIF/Geolocation data. Can I Get To The Original File Representing This Image on the Disk? Nope. For security purposes, you only get the UIImage. How Can I Look At The Underlying Pixels of the UIImage? Since the UIImage is immutable, you can't look at the direct pixels. However, you can make a copy. The code to this looks something like this: UIImage* image = ...; // An image NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage)); unsigned char* pixelBytes = (unsigned char *)[pixelData bytes]; // Take away the red pixel, assuming 32-bit RGBA for(int i = 0; i < [pixelData length]; i += 4) { pixelBytes[i] = 0; // red pixelBytes[i+1] = pixelBytes[i+1]; // green pixelBytes[i+2] = pixelBytes[i+2]; // blue pixelBytes[i+3] = pixelBytes[i+3]; // alpha } However, note that CGDataProviderCopyData provides you with an "immutable" reference to the data - meaning you can't change it (and you may get a BAD_ACCESS error if you do). Look at the next question if you want to see how you can modify the pixels. How Do I Modify The Pixels of the UIImage? The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article. Once you have the bitmap context as they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels: CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; Do remember to release your references though, otherwise you're going to be leaking quite a bit of memory. After I Select 3 Images From The Camera, I Run Out Of Memory. Help! 
You have to remember that even though on disk these images take up only a few hundred kilobytes at most, that's because they're compressed as a PNG or JPG. When they are loaded into the UIImage, they become uncompressed. A quick over-the-envelope calculation would be: width x height x 4 = bytes in memory That's assuming 32-bit pixels. If you have 16-bit pixels (some JPGs are stored as RGBA-5551), then you'd replace the 4 with a 2. Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math: 1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB 8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory. OK, I Understand Why I Have No Memory. What Do I Do? There is never any reason to display images at their full resolution. The iPhone has a screen of 480 x 320 pixels, so you're just wasting space. If you find yourself in this situation, ask yourself the following question: Do I need the full resolution image? If the answer is yes, then you should save it to disk for later use. If the answer is no, then read the next part. Once you've decided what to do with the full-resolution image, then you need to create a smaller image to use for displaying. Many times you might even want several sizes for your image: a thumbnail, a full-size one for displaying, and the original full-resolution image. OK, I'm Hooked. How Do I Resize the Image? Unfortunately, there is no defined way how to resize an image. Also, it's important to note that when you resize it, you'll get a new image - you're not modifying the old one. There are a couple of methods to do the resizing. I'll present them both here, and explain the pros and cons of each. Method 1: Using UIKit + (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize; { // Create a graphics image context UIGraphicsBeginImageContext(newSize); // Tell the old image to draw in this new context, with the desired // new size [image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)]; // Get the new image from the context UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); // End the context UIGraphicsEndImageContext(); // Return the new image. return newImage; } This method is very simple, and works great. It will also deal with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread safe, and since thumbnailing is a relatively expensive operation (approximately ~2.5s on a 3G for a 1600 x 1200 pixel image), this is very much an operation you may want to do in the background, on a separate thread. 
Method 2: Using CoreGraphics + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)newSize; { CGFloat targetWidth = targetSize.width; CGFloat targetHeight = targetSize.height; CGImageRef imageRef = [sourceImage CGImage]; CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef); if (bitmapInfo == kCGImageAlphaNone) { bitmapInfo = kCGImageAlphaNoneSkipLast; } CGContextRef bitmap; if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) { bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } else { bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } if (sourceImage.imageOrientation == UIImageOrientationLeft) { CGContextRotateCTM (bitmap, radians(90)); CGContextTranslateCTM (bitmap, 0, -targetHeight); } else if (sourceImage.imageOrientation == UIImageOrientationRight) { CGContextRotateCTM (bitmap, radians(-90)); CGContextTranslateCTM (bitmap, -targetWidth, 0); } else if (sourceImage.imageOrientation == UIImageOrientationUp) { // NOTHING } else if (sourceImage.imageOrientation == UIImageOrientationDown) { CGContextTranslateCTM (bitmap, targetWidth, targetHeight); CGContextRotateCTM (bitmap, radians(-180.)); } CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef); CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; CGContextRelease(bitmap); CGImageRelease(ref); return newImage; } The benefit of this method is that it is thread-safe, plus it takes care of all the small things (using correct color space and bitmap info, dealing with image orientation) that the UIKit version does. How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)? 
It is very similar to the method above, and it looks like this: + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize; { CGSize imageSize = sourceImage.size; CGFloat width = imageSize.width; CGFloat height = imageSize.height; CGFloat targetWidth = targetSize.width; CGFloat targetHeight = targetSize.height; CGFloat scaleFactor = 0.0; CGFloat scaledWidth = targetWidth; CGFloat scaledHeight = targetHeight; CGPoint thumbnailPoint = CGPointMake(0.0,0.0); if (CGSizeEqualToSize(imageSize, targetSize) == NO) { CGFloat widthFactor = targetWidth / width; CGFloat heightFactor = targetHeight / height; if (widthFactor > heightFactor) { scaleFactor = widthFactor; // scale to fit height } else { scaleFactor = heightFactor; // scale to fit width } scaledWidth = width * scaleFactor; scaledHeight = height * scaleFactor; // center the image if (widthFactor > heightFactor) { thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5; } else if (widthFactor < heightFactor) { thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5; } } CGImageRef imageRef = [sourceImage CGImage]; CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef); if (bitmapInfo == kCGImageAlphaNone) { bitmapInfo = kCGImageAlphaNoneSkipLast; } CGContextRef bitmap; if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) { bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } else { bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } // In the right or left cases, we need to switch scaledWidth and scaledHeight, // and also the thumbnail point if (sourceImage.imageOrientation == UIImageOrientationLeft) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(90)); CGContextTranslateCTM (bitmap, 0, -targetHeight); } else if (sourceImage.imageOrientation == UIImageOrientationRight) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(-90)); CGContextTranslateCTM (bitmap, -targetWidth, 0); } else if (sourceImage.imageOrientation == UIImageOrientationUp) { // NOTHING } else if (sourceImage.imageOrientation == UIImageOrientationDown) { CGContextTranslateCTM (bitmap, targetWidth, targetHeight); CGContextRotateCTM (bitmap, radians(-180.)); } CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef); CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; CGContextRelease(bitmap); CGImageRelease(ref); return newImage; } The method we employ here is to create a bitmap with the desired size, but draw an image that is actually larger, thus maintaining the aspect ratio. So We've Got Our Scaled Images - How Do I Save Them To Disk? This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels. 
Apple provides two functions that help us with this (documentation is here): NSData* UIImagePNGRepresentation(UIImage *image); NSData* UIImageJPEGRepresentation (UIImage *image, CGFloat compressionQuality); And if you want to use them, you'd do something like: UIImage* myThumbnail = ...; // Get some image NSData* imageData = UIImagePNGRepresentation(myThumbnail); Now we're ready to save it to disk, which is the final step (say into the documents directory): // Give a name to the file NSString* imageName = @"MyImage.png"; // Now, we have to find the documents directory so we can save it // Note that you might want to save it elsewhere, like the cache directory, // or something similar. NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString* documentsDirectory = [paths objectAtIndex:0]; // Now we get the full path to the file NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName]; // and then we write it out [imageData writeToFile:fullPathToFile atomically:NO]; You would repeat this for every version of the image you have. How Do I Load These Images Back Into Memory? Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile: in the Apple documentation.

    Read the article

  • Secure wipe of a hard drive using WinPE.

    - by Derek Meier
    The wiping of a hard drive is typically seen as fairly trivial.  There are tons of applications out there that will do it for you.  Point → Click → Global-Thermo Nuclear War. However, these applications are typically expensive or unreliable.  Plus, if you have a laptop or lack a secondary computer to put the hard drive into – how on earth do you wipe it quickly and easily while still conforming to a 7 pass rule (this means that every possible bit on the hard drive is set to 0 and then to 1 seven times in a row)?  Yes, one pass should be enough – as turning every bit from a 1 to a zero will wipe the data from existence.  But, we’re dealing with tinfoil hat wearing types here people.  DOD standards dictate at least 3 passes, and typically 7 is the preferred amount.  I’m not going to argue about data recovery.  I have been told to use 7 passes, and so I will.  So say we all! Quite some time ago I used to make a BartPE XP-based boot cd for the original purpose of securely wiping data.  I loved BartPE and integrated so many plugins into my builds that I could do pretty much anything directly from CD.  Reset passwords, uninstall security updates, wipe drives, chkdsk, remove spyware, install Windows, etc.  However, with the newer multi-core systems and new chipsets coming out from vendors, I found that BartPE was rather difficult to keep up to date.  I have since switched to WinPE 3.0 (Windows Preinstallation Environment). http://technet.microsoft.com/en-us/library/cc748933(WS.10).aspx  It is fairly simple to create your own CD, and I have made a few helpful scripts to easily integrate drivers and rebuild the ISO file for you.  I’ll cover making your own boot CD utilizing WinPE 3.0 in a later post – I can talk about WinPE forever and need to collect my thoughts!!  My wife loves talking about WinPE almost as much as talking about Doctor Who.  Wait, did I say loves?  Hmmmm, I may have meant loathes. The topic at hand?  Right. Wiping a drive! I must have drunk too much coffee this morning.  I like to use a simple batch script that calls a combination of diskpart.exe from Microsoft® and Sdelete.exe created by our friend Mark Russinovich. http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx All of the following files are located within the same directory on my WinPE boot CD. Here are the contents of wipe_me.bat, script.txt and sdelete.reg. Wipe_me.bat:   @echo off echo. echo     I will completely wipe the local hard drives using echo     7 individual wipes. The data will NOT echo     be recoverable.  I will begin after you pause echo. echo Preparing to partition and format disk. Diskpart.exe /s "script.txt" REM I was annoyed by not having a completely automated script – and Sdelete wants you to accept the license agreement.
So, I added a registry file to skip doing that. regedit /S sdelete.reg rem sdelete options selected are: -p (passes) -c (zero free space) -s (recurse through subdirectories, if any) -z (clean free space) [drive letter] sdelete.exe -p 7 -c -s -z c: echo. echo Pass seven complete. echo. echo Wiping complete. Pause exit script.txt: list disk select disk 0 clean create partition primary select partition 1 active format FS=NTFS LABEL="New Volume" QUICK assign letter=c exit *Notes: This script assumes one local hard drive – change the script as you see fit for your environment.  The clean command will overwrite the master boot record and any hidden sector information – so be careful!   sdelete.reg: Windows Registry Editor Version 5.00 [HKEY_CURRENT_USER\Software\Sysinternals\SDelete] "EulaAccepted"=dword:00000001   With a combination of WinPE, sdelete.exe and your friendly neighborhood text editor you can begin wiping drives as quickly and easily as possible!  I hope this helps, I get asked this a lot in my line of work. Best of luck, Derek

    Read the article

  • Entity Framework 4.0: Creating objects of correct type when using lazy loading

    - by DigiMortal
    In my posting about Entity Framework 4.0 and POCOs I introduced lazy loading in EF applications. EF uses proxy classes for lazy loading, and this means we have new types that come and go dynamically at runtime. We don’t have these types available when we write code, but we cannot forget that EF may expect us to use dynamically generated types. In this posting I will give you a simple hint on how to use the correct types in your code. The background of lazy loading and proxy classes First, let me explain briefly what a proxy class is. Business classes, when designed correctly, have no knowledge about their birth and death – they don’t know how they are created and they don’t know how their data is persisted. This is the responsibility of the object runtime. When we use lazy loading we need slightly different classes that know how to load data for a property when code accesses that property for the first time. As we cannot add this functionality to our business classes (they may be stored through more than one data access technology or by more than one Data Access Layer (DAL)), we create proxy classes that extend our business classes. If we have a class called Product, and Product has a lazy loaded property called Customer, then we need a proxy class, let’s say ProductProxy, that has the same public signature as Product so we can use it INSTEAD OF Product in our code. ProductProxy overrides the Customer property. If the customer is never asked for, then Customer is null. But if we ask for the Customer property, then the overridden property of ProductProxy loads it from the database. This is how lazy loading works. Problem – two types for the same thing As lazy loading may introduce dynamically generated proxy types, we don’t know in our application code which type is returned. We cannot be sure that we get back a Product and not a ProductProxy. This leads us to the following question: how can we create a Product of the correct type if we don’t know the correct type? In EF the solution is simple. Solution – use factory methods If you are using repositories and you are not using factories (imho they are pretty pointless when you have a mapper) you can add factory methods to your EF based repositories. Take a look at this class. public class Event {     public int ID { get; set; }     public string Title { get; set; }     public string Location { get; set; }     public virtual Party Organizer { get; set; }     public DateTime Date { get; set; } } We have a virtual member called Organizer. This property is virtual because we want to use lazy loading on this class, so Organizer is loaded only when we ask for it. EF provides us with a method called CreateObject<T>(). CreateObject<T>() is a member of the ObjectContext class and it creates an object based on the given type. At runtime a proxy type for Event is created for us automatically, and when we call CreateObject<T>() for Event it returns an object of the Event proxy type. The factory method for the events repository is as follows. public Event CreateEvent() {     var evt = _context.CreateObject<Event>();     return evt; } And we are done. Instead of creating factory classes we created factory methods that guarantee that created objects are of the correct type. Conclusion Although lazy loading introduces some new objects that we cannot use at design time, because they live only at runtime, we can write code without worrying about the exact implementation type of an object. This holds true as long as we have clean code and we don’t make any decisions based on object type. EF 4.0 provides us with a very simple factory method that creates and returns objects of the correct type. 
All we had to do was adding factory methods to our repositories.
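To make this concrete, here is a minimal sketch of how such a factory method might look and be used from calling code. The repository class, its constructor and the "Events" entity set name are illustrative assumptions of mine, not taken from the posting; only ObjectContext.CreateObject<T>(), AddObject() and SaveChanges() are standard EF 4.0 API.

using System.Data.Objects;   // EF 4.0 ObjectContext lives here

// Hypothetical repository wrapping an EF 4.0 ObjectContext
public class EventRepository
{
    private readonly ObjectContext _context;   // assumed field name

    public EventRepository(ObjectContext context)
    {
        _context = context;
    }

    // Factory method: returns a proxy-enabled Event so lazy loading keeps working
    public Event CreateEvent()
    {
        return _context.CreateObject<Event>();
    }

    public void SaveEvent(Event evt)
    {
        _context.AddObject("Events", evt);   // "Events" entity set name is an assumption
        _context.SaveChanges();
    }
}

// Calling code never uses new Event() directly:
// var evt = repository.CreateEvent();
// evt.Title = "SQL Community Event";
// repository.SaveEvent(evt);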

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM is protecting the cached rights on the local machine and if that cache has become invalid in anyway, this error is thrown. Offline rights and security First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution and demonstrates our methodology to ensure that solutions address both security and usability and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed however and can be as low as under an hour to as long as years. It is also possible to switch off the ability to access content offline which can be useful when content is very sensitive and requires a tight leash. So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the users rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops and allows for users rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution. A user doesn't have to make any decision when going offline, they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work because people forget to do this and will therefore be unable to legitimately access their content offline. If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period, the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods, because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place. If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important, consider the scenario where a senior executive downloads all their email but doesn't open any of it. Disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized their experience is a good one and the document opens. 
More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams. So what about problems accessing the offline rights database? So onto the core issue... when these rights are cached to your machine they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Well, what you do not want to happen is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error, Unable to Connect to Offline database. The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database. How do you solve the problem? The resolution, however, is simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached to more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one, because it still thinks the old cache is valid. So be aware, it is good practice to increase the server limit from the default of 1 to say 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use; Download DeleteOfflineDBs.bat Note that this uses pskill to stop irmBackground.exe from running. This is part of the IRM Desktop and holds open a lock to the offline database. Either kill this from Task Manager or use pskill as part of the script.

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue for governments, but it is not: It is an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of their stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. At the moment of this blog anyway, we don't know yet if that is true or how they got the information but how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military, with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information...he reportedly didn't "hack" his way through anything to get to the documents which might have raised some red flags...), is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... Done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc, etc.... And then think about that last quote above from what was a very junior level person in the organization...still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step it to make it physically, logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or moble media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily job? Oracle Desktop Virtualization products can help.Oracle's comprehensive suite of desktop virtualization and access products allow your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive.  Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.  
Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see StuxNet for why that is important...)In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives, or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive.  And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic.But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information.  And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed for example.Another benefit of Oracle's Desktop Virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative polices if, for example, an employee changes roles or leaves the company and should no longer have access to the information.Oracle's Desktop Virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use.  But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises.For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9 OracleBPM Gauging Maturity of your BPM Strategy – part 1/2, bit.ly/vJE9UZ MagicChatzi Dutch ACE’s and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr… leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac… Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA eelzinga Thanks for organising it Andreas! #soacommunity eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3 OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN oracletechnet Wikis.oracle.com lives leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr Jphjulstad Red carpet to Oracle BPM – evita.no evita.no/ikbViewer/soa-… Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h 9 Nov Favorite Undo Retweet Reply OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3 soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3 OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11 orclateamsoa A-Team SOA Blog: Case Management in BPM 11g -  Mark Foster Oracle BPM 11g & Case Management I’ve seen… t.co/l5zb6pFr t_winterberg Die nächste SIG #SOA steht an: 7.12. in Hamburg. 
Neues Tooling und Erfahrungen rund um Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key. lucasjellema Lucas Jellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api… OTNArchBeat SOA, cloud: it’s the architecture that matters | Joe McKendrick zd.net/tNCiTF orclateamsoa: Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too? brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg eelzinga New blogpost : Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN

    Read the article

  • ASP.NET MVC: Converting business objects to select list items

    - by DigiMortal
    Some of our business classes are used to fill dropdown boxes or select lists. And often you have some base class for all your business classes. In this posting I will show you how to use base business class to write extension method that converts collection of business objects to ASP.NET MVC select list items without writing a lot of code. BusinessBase, BaseEntity and other base classes I prefer to have some base class for all my business classes so I can easily use them regardless of their type in contexts I need. NB! Some guys say that it is good idea to have base class for all your business classes and they also suggest you to have mappings done same way in database. Other guys say that it is good to have base class but you don’t have to have one master table in database that contains identities of all your business objects. It is up to you how and what you prefer to do but whatever you do – think and analyze first, please. :) To keep things maximally simple I will use very primitive base class in this example. This class has only Id property and that’s it. public class BaseEntity {     public virtual long Id { get; set; } } Now we have Id in base class and we have one more question to solve – how to better visualize our business objects? To users ID is not enough, they want something more informative. We can define some abstract property that all classes must implement. But there is also another option we can use – overriding ToString() method in our business classes. public class Product : BaseEntity {     public virtual string SKU { get; set; }     public virtual string Name { get; set; }       public override string ToString()     {         if (string.IsNullOrEmpty(Name))             return base.ToString();           return Name;     } } Although you can add more functionality and properties to your base class we are at point where we have what we needed: identity and human readable presentation of business objects. Writing list items converter Now we can write method that creates list items for us. public static class BaseEntityExtensions {            public static IEnumerable<SelectListItem> ToSelectListItems<T>         (this IList<T> baseEntities) where T : BaseEntity     {         return ToSelectListItems((IEnumerator<BaseEntity>)                    baseEntities.GetEnumerator());     }       public static IEnumerable<SelectListItem> ToSelectListItems         (this IEnumerator<BaseEntity> baseEntities)     {         var items = new HashSet<SelectListItem>();           while (baseEntities.MoveNext())         {             var item = new SelectListItem();             var entity = baseEntities.Current;               item.Value = entity.Id.ToString();             item.Text = entity.ToString();               items.Add(item);         }           return items;     } } You can see here to overloads of same method. One works with List<T> and the other with IEnumerator<BaseEntity>. Although mostly my repositories return IList<T> when querying data there are always situations where I can use more abstract types and interfaces. Using extension methods in code In your code you can use ToSelectListItems() extension methods like shown on following code fragment. ... var model = new MyFormModel(); model.Statuses = _myRepository.ListStatuses().ToSelectListItems(); ... You can call this method on all your business classes that extend your base entity. Wanna have some fun with this code? Write overload for extension method that accepts selected item ID.
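One possible take on that exercise, as a minimal sketch only: it reuses the BaseEntity and SelectListItem types from above, while the parameter name selectedId, the extra class name and the usage line are my own choices, not part of the original posting.

using System.Collections.Generic;
using System.Web.Mvc;

public static class BaseEntitySelectionExtensions
{
    // Overload that also marks the entity whose Id matches selectedId as selected
    public static IEnumerable<SelectListItem> ToSelectListItems<T>
        (this IList<T> baseEntities, long selectedId) where T : BaseEntity
    {
        var items = new List<SelectListItem>();

        foreach (var entity in baseEntities)
        {
            items.Add(new SelectListItem
            {
                Value = entity.Id.ToString(),
                Text = entity.ToString(),
                Selected = (entity.Id == selectedId)
            });
        }

        return items;
    }
}

// Usage, assuming a repository call like the one shown above:
// model.Statuses = _myRepository.ListStatuses().ToSelectListItems(model.StatusId);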

    Read the article

  • Building InstallShield based Installers using Team Build 2010

    - by jehan
    Last few weeks, I have been working on Application Packaging stuff using all the widely used tools like InstallShield, WISE, WiX and Visual Studio Installer. So, I thought it would be good to post about how to Build the Installers developed using these tools with Team Build 2010. This post will focus on how to build the InstallShield generated packages using Team Build 2010. For the release of VS2010, Microsoft has partnered with Flexera who are the makers of InstallShield to create InstallShield Limited Edition, especially for the customers of Visual Studio. First Microsoft planned to release WiX (Windows Installer Xml) with VS2010, but later Microsoft dropped  WiX from VS2010 due to reasons which are best known to them and partnered with InstallShield for Limited Edition. It disappointed lot of people because InstallShield Limited Edition provides only few features of InstallShield and it may not feasable to build complex installer packages using this and it also requires License, where as WiX is an open source with no license costs and it has proved efficient in building most complex packages. Only the last three features are available in InstallShield Limited Edition from the total features offered by InstallShield as shown in below list.                                                                                            Feature Limited Edition for Visual Studio 2010 Standalone Build System Maintain a clean build machine by using only the part of InstallShield that compiles the installations. InstallShield Best Practices Validation Suite Avoid common installation issues. Try and Die Functionality RCreate a fully functional trial version of your product. InstallShield Repackager Create Windows Installer setups from any legacy installation. Multilingual Support Present installation text in up to 35 languages. Microsoft App-V™ Support Deploy your applications as App-V virtual packages that run without conflict. Industry-Standard InstallScript Achieve maximum flexibility in your installations. Dialog Editor Modify the layout of existing end-user dialogs, create new custom dialogs, and more. Patch Creation Build updates and patches for your products. Setup Prerequisite Editor Easily control prerequisite restart behavior and source locations. String Editor View Control the localizable text strings displayed at run time with this spreadsheet-like table. Text File Changes View Configure search-and-replace actions for content in text files to be modified at run time. Virtual Machine Detection Block your installations from running on virtual machines. Unicode Support Improve multi-language installation development. Support for 64-Bit COM Extraction Extract COM data from a 64-bit COM server. Windows Installer Installation Chaining Add MSI packages to your main installation and chain them together. XML Support Save time by quickly testing XML configuration changes to installation projects. Billboard Support for Custom Branding Display Adobe Flash billboards and other graphic files during the install process. SaaS Support (IIS 7 and SSL Technologies) Easily deploy Windows-based Web applications. Project Assistant Jumpstart a project by using a simplified set of views. Support for Digital Signatures Save time by digitally signing all your files at build time. Easily Run Custom Actions Schedule a custom action to run at precisely the right moment in your installation. Installation Prerequisites Check for and install prerequisites before your installation is executed. 
To create an InstallShield project in Visual Studio and build it using Team Build 2010, first you have to add the InstallShield project template to your solution file. If you want to use InstallShield Limited Edition you can add it from File → New → Project → Other Project Types → Setup and Deployment → InstallShield LE, and if you are using other versions of InstallShield, then you have to add it from File → New → Project → InstallShield Projects. Here, I'm using InstallShield 2011 Premier edition as I already have it installed. I have created a simple package for the TailSpin application which has a feature called Web, a few components and an IIS web site for the TailSpin application. Before starting work on this, I thought I might need to build the package by calling the InvokeProcess activity in the build process template or have to create a new custom activity. But it got built without any changes to the build process template. However, it was failing with the error message below. C:\Program Files (x86)\MSBuild\InstallShield\2011\InstallShield.targets (68): The "InstallShield.Tasks.InstallShield" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\InstallShield\2010Limited\InstallShield.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files(x86)\MSBuild\InstallShield\2011\InstallShield.Tasks.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. This error is due to the 64-bit build machine I'm using. The issue is reproducible if you are queuing a build on a 64-bit build machine. To avoid it you have to ensure that you configure the build definition for your InstallShield project to load the InstallShield.Tasks.dll file (which is a 32-bit file); otherwise, you will encounter this build error informing you that the InstallShield.Tasks.dll file could not be loaded. To select the 32-bit version of MSBuild, click the Process tab of your build definition in Team Explorer. Then, under the Advanced node, find the MSBuild Platform setting and select x86. Note that if you are using a 32-bit build machine, you can select either Auto or x86 for the MSBuild Platform setting. Once I made the above changes, the build succeeded.

    Read the article

  • texture mapping with lib3ds and SOIL help

    - by Adam West
    I'm having trouble with my project for loading a texture map onto a model. Any insight into what is going wrong with my code is fantastic. Right now the code only renders a teapot which I have assinged after creating it in 3DS Max. 3dsloader.cpp #include "3dsloader.h" Object::Object(std:: string filename) { m_TotalFaces = 0; m_model = lib3ds_file_load(filename.c_str()); // If loading the model failed, we throw an exception if(!m_model) { throw strcat("Unable to load ", filename.c_str()); } // set properties of texture coordinate generation for both x and y coordinates glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); // if not already enabled, enable texture generation if(! glIsEnabled(GL_TEXTURE_GEN_S)) glEnable(GL_TEXTURE_GEN_S); if(! glIsEnabled(GL_TEXTURE_GEN_T)) glEnable(GL_TEXTURE_GEN_T); } Object::~Object() { if(m_model) // if the file isn't freed yet lib3ds_file_free(m_model); //free up memory glDisable(GL_TEXTURE_GEN_S); glDisable(GL_TEXTURE_GEN_T); } void Object::GetFaces() { m_TotalFaces = 0; Lib3dsMesh * mesh; // Loop through every mesh. for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { // Add the number of faces this mesh has to the total number of faces. m_TotalFaces += mesh->faces; } } void Object::CreateVBO() { assert(m_model != NULL); // Calculate the number of faces we have in total GetFaces(); // Allocate memory for our vertices and normals Lib3dsVector * vertices = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsVector * normals = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsTexel* texCoords = new Lib3dsTexel[m_TotalFaces * 3]; Lib3dsMesh * mesh; unsigned int FinishedFaces = 0; // Loop through all the meshes for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { lib3ds_mesh_calculate_normals(mesh, &normals[FinishedFaces*3]); // Loop through every face for(unsigned int cur_face = 0; cur_face < mesh->faces;cur_face++) { Lib3dsFace * face = &mesh->faceL[cur_face]; for(unsigned int i = 0;i < 3;i++) { memcpy(&texCoords[FinishedFaces*3 + i], mesh->texelL[face->points[ i ]], sizeof(Lib3dsTexel)); memcpy(&vertices[FinishedFaces*3 + i], mesh->pointL[face->points[ i ]].pos, sizeof(Lib3dsVector)); } FinishedFaces++; } } // Generate a Vertex Buffer Object and store it with our vertices glGenBuffers(1, &m_VertexVBO); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, vertices, GL_STATIC_DRAW); // Generate another Vertex Buffer Object and store the normals in it glGenBuffers(1, &m_NormalVBO); glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, normals, GL_STATIC_DRAW); // Generate a third VBO and store the texture coordinates in it. 
glGenBuffers(1, &m_TexCoordVBO); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsTexel) * 3 * m_TotalFaces, texCoords, GL_STATIC_DRAW); // Clean up our allocated memory delete vertices; delete normals; delete texCoords; // We no longer need lib3ds lib3ds_file_free(m_model); m_model = NULL; } void Object::applyTexture(const char*texfilename) { float imageWidth; float imageHeight; glGenTextures(1, & textureObject); // allocate memory for one texture textureObject = SOIL_load_OGL_texture(texfilename,SOIL_LOAD_AUTO,SOIL_CREATE_NEW_ID,SOIL_FLAG_MIPMAPS); glPixelStorei(GL_UNPACK_ALIGNMENT,1); glBindTexture(GL_TEXTURE_2D, textureObject); // use our newest texture glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_WIDTH,&imageWidth); glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_HEIGHT,&imageHeight); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // give the best result for texture magnification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //give the best result for texture minification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // don't repeat texture glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat textureglTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat texture glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE); glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,imageWidth,imageHeight,0,GL_RGB,GL_UNSIGNED_BYTE,& textureObject); } void Object::Draw() const { // Enable vertex, normal and texture-coordinate arrays. glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Bind the VBO with the normals. glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); // The pointer for the normals is NULL which means that OpenGL will use the currently bound VBO. glNormalPointer(GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glTexCoordPointer(2, GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glVertexPointer(3, GL_FLOAT, 0, NULL); // Render the triangles. glDrawArrays(GL_TRIANGLES, 0, m_TotalFaces * 3); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } 3dsloader.h #include "main.h" #include "lib3ds/file.h" #include "lib3ds/mesh.h" #include "lib3ds/material.h" class Object { public: Object(std:: string filename); virtual ~Object(); virtual void Draw() const; virtual void CreateVBO(); void applyTexture(const char*texfilename); protected: void GetFaces(); unsigned int m_TotalFaces; Lib3dsFile * m_model; Lib3dsMesh* Mesh; GLuint textureObject; GLuint m_VertexVBO, m_NormalVBO, m_TexCoordVBO; }; Called in the main cpp file with: VBO,apply texture and draw (pretty simple, how ironic) and thats it, please help me forum :)

    Read the article

  • A new SQL, a new Analysis Services, a new Workshop! #ssas #sql2012

    - by Marco Russo (SQLBI)
    One week ago Microsoft SQL Server 2012 finally debuted with a virtual launch event and you can find many intro sessions there (20 minutes each). There is a lot of new content available if you want to learn more about SQL 2012 and in this blog post I’d like to provide a few link to sessions, documents, bits and courses that are available now or very soon. First of all, the release of Analysis Services 2012 has finally released PowerPivot 2012 (many of us called it PowerPivot v2 before this official name) and also the new Data Mining Add-in for Microsoft Office 2010, now available also for Excel 64bit! And, of course, don’t miss the Microsoft SQL Server 2012 Feature Pack, there are a lot of upgrades for both DBAs and developers. I just discovered there is a new LocalDB version of SQL Express that can run in user mode without any setup. Is this the end of SQL CE? But now, back to Analysis Services: if you want some tutorial on Tabular, the Microsoft Virtual Academy has a whole track dedicated to Analysis Services 2012 but you will probably be interested also in the one about Reporting Services 2012. If you think that virtual is good but it’s not enough, there are plenty of conferences in the coming months – these are just those where I and Alberto will deliver some SSAS Tabular presentations: SQLBits X, London, March 29-31, 2012: if you are in London or want a good reason to go, this is the most important SQL Server event in Europe this year, no doubts about it. And not only because of the high number of attendees, but also because there is an impressive number of speakers (excluding me, of course) coming from all over the world. This is an event second only to PASS Summit in Seattle so there are no good reasons to not attend it. Microsoft SQL Server & Business Intelligence Conference 2012, Milan, March 28-29, 2012: this is an Italian conference so the language might be a barrier, but many of us also speak English and the food is good! Just a few seats still available. TechEd North America, Orlando, June 11-14, 2012: you know, this is a big event and it contains everything – if you want to spend a whole day learning the SSAS Tabular model with me and Alberto, don’t miss our pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (be careful, it is on June 10, a nice study-Sunday!). TechEd Europe, Amsterdam, June 26-29, 2012: the European version of TechEd provides almost the same content and you don’t have to go overseas. We also run the same pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (in this case, it is on June 25, that’s a regular Monday). I and Alberto will also speak at some user group meeting around Europe during… well, we’re going to travel a lot in the next months. In fact, if you want to get a complete training on SSAS Tabular, you should spend two days with us in one of our SSAS Tabular Workshop! We prepared a 2-day seminar, a very intense one, that start from the simple tabular modeling and cover architecture, DAX, query, advanced modeling, security, deployment, optimization, monitoring, relationships with PowerPivot and Multidimensional… Really, there are a lot of stuffs here! 
We announced the first dates in Europe and also an online edition optimized for America’s time zone: Apr 16-17, 2012 – Amsterdam, Netherlands Apr 26-27, 2012 – Copenhagen, Denmark May 7-8, 2012 – Online for America’s time zone May 14-15, 2012 – Brussels, Belgium May 21-22, 2012 – Oslo, Norway May 24-25, 2012 – Stockholm, Sweden May 28-29, 2012 – London, United Kingdom May 31-Jun 1, 2012 – Milan, Italy (Italian language) Also Chris Webb will join us in this workshop and in every date you can find who is the speaker on the web site. The course is based on our upcoming book, almost 600 pages (!) about SSAS Tabular, an incredible effort that will be available very soon in a preview (rough cuts from O’Reilly) and will be on the shelf in May. I will provide a link to order it as soon as we have one! And if you think that this is not enough… you’re right! Do you know what is the only thing you can do to optimize your Tabular model? Optimize your DAX code. Learning DAX is easy, mastering DAX requires some knowledge… and our DAX Advanced Workshop will provide exactly the required content. Public classes will be available later this year, by now we just deliver it on demand.

    Read the article

  • In the Groove: PASS Board Year 1, Q3

    - by Denise McInerney
    It's nine months into my first year on the PASS Board and I feel like I've found my rhythm. I've accomplished one of the goals I set out for the year and have made progress on others. Here's a recap of the last few months. Anti-Harassment Policy & Process Completed In April I began work on a Code of Conduct for the PASS Summit. The Board had several good discussions and various PASS members provided feedback. You can read more about that in this blog post. Since the document was focused on issues of harassment we renamed it the "Anti-Harassment Policy " and it was approved by the Board in August. The next step was to refine the guideliness and process for enforcement of the AHP. A subcommittee worked on this and presented an update to the Board at the September meeting. You can read more about that in this post, and you can find the process document here. Global Growth Expanding PASS' reach and making the organization relevant to SQL Server communities around the world has been a focus of the Board's work in 2012. We took the Global Growth initiative out to the community for feedback, and everyone on the Board participated, via Twitter chats, Town Hall meetings, feedback forums and in-person discussions. This community participation helped shape and refine our plans. Implementing the vision for Global Growth goes across all portfolios. The Virtual Chapters are well-positioned to help the organization move forward in this area. One outcome of the Global Growth discussions with the community is the expansion of two of the VCs from country-specific to language-specific. Thanks to the leadership in Brazil & Mexico for taking the lead here. I look forward to continued success for the Portuguese- and Spanish-language Virtual Chapters. Together with the Global Chinese VC PASS is off to a good start in making the VC's truly global. Virtual Chapters The VCs continue to grow and expand. Volunteers recently rebooted the Azure and Virutalization VCs, and a new  Education VC will be launching soon. Every week VCs offer excellent free training on a variety of topics. It's the dedication of the VC leaders and volunteers that make all this possible and I thank them for it. Board meeting The Board had an in-person meeting in September in San Diego, CA.. As usual we covered a number of topics including governance changes to support Global Growth, the upcoming Summit, 2013 events and the (then) upcoming PASS election. Next Up Much of the last couple of months has been focused on preparing for the PASS Summit in Seattle Nov. 6-9. I'll be there all week;  feel free to stop me if you have a question or concern, or just to introduce yourself.  Here are some of the places you can find me: VC Leaders Meeting Tuesday 8:00 am the VC leaders will have a meeting. We'll review some of the year's highlights and talk about plans for the next year Welcome Reception The VCs will be at the Welcome Reception in the new VC Lounge. Come by, learn more about what the VCs have to offer and meet others who share your interests. Exceptional DBA Awards Party I'm looking forward to seeing PASS Women in Tech VC leader Meredith Ryan receive her award at this event sponsored by Red Gate Session Presentation I will be presenting a spotlight session entitled "Stop Bad Data in Its OLTP Tracks" on Wednesday at 3:00 p.m. Exhibitor Reception This reception Wednesday evening in the Expo Hall is a great opportunity to learn more about tools and solutions that can help you in your job. 
Women in Tech Luncheon This year marks the 10th WIT Luncheon at PASS. I'm honored to be on the panel with Stefanie Higgins, Kevin Kline, Kendra Little and Jen Stirrup. This event is on Thursday at 11:30. Community Appreciation Party Thursday evening, don't miss this event thanking all of you for everything you do for PASS and the community. This year we will be at the Experience Music Project and it promises to be a fun party. Board Q & A Friday 9:45-11:15 am the members of the Board will be available to answer your questions. If you have a question for us, or want to hear what other members are thinking about, come by room 401 Friday morning.

    Read the article

  • Can't remove the libpcap0.8 package

    - by Yogesh
    I am getting error when running apt-get remove root@System:~/Downloads# sudo apt-get remove The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. and when I ran apt-get remove -f this is what happens: root@System:~/Downloads# sudo apt-get remove -f The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# clear root@System:~/Downloads# sudo apt-get remove -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# root@System:~/Downloads# sudo apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. 
root@System:~/Downloads# apt-cache policy libpcap0.8:amd64 libpcap0.8 libpcap0.8-dev libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8-dev: Installed: 1.5.3-2 Candidate: 1.5.3-2 Version table: *** 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages 100 /var/lib/dpkg/status root@System:~/Downloads# root@System:~/Downloads# sudo apt-get -f remove libpcap0.8 libpcap0.8-dev libpcap0.8-dev:i386 libpcap0.8:i386 Reading package lists... Done Building dependency tree Reading state information... Done Package 'libpcap0.8-dev:i386' is not installed, so not removed. Did you mean 'libpcap0.8-dev'? You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libpcap-dev : Depends: libpcap0.8-dev but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). root@System:~/Downloads# sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads#

    Read the article

  • Infiniband: a highperformance network fabric - Part I

    - by Karoly Vegh
    Introduction:At the OpenWorld this year I managed to chat with interesting people again - one of them answering Infiniband deepdive questions with ease by coffee turned out to be one of Oracle's IB engineers, Ted Kim, who actually actively participates in the Infiniband Trade Association and integrates Oracle solutions with this highspeed network. This is why I love attending OOW. He granted me an hour of his time to talk about IB. This post is mostly based on that tech interview.Start of the actual post: Traditionally datatransfer between servers and storage elements happens in networks with up to 10 gigabit/seconds or in SANs with up to 8 gbps fiberchannel connections. Happens. Well, data rather trickles through.But nowadays data amounts grow well over the TeraByte order of magnitude, and multisocket/multicore/multithread Servers hunger data that these transfer technologies just can't deliver fast enough, causing all CPUs of this world do one thing at the same speed - waiting for data. And once again, I/O is the bottleneck in computing. FC and Ethernet can't keep up. We have half-TB SSDs, dozens of TB RAM to store data to be modified in, but can't transfer it. Can't backup fast enough, can't replicate fast enough, can't synchronize fast enough, can't load fast enough. The bad news is, everyone is used to this, like back in the '80s everyone was used to start compile jobs and go for a coffee. Or on vacation. The good news is, there's an alternative. Not so-called "bleeding-edge" 8gbps, but (as of now) 56. Not layers of overhead, but low latency. And it is available now. It has been for a while, actually. Welcome to the world of Infiniband. Short history:Infiniband was born as a result of joint efforts of HPAQ, IBM, Intel, Sun and Microsoft. They planned to implement a next-generation I/O fabric, in the 90s. In the 2000s Infiniband (from now on: IB) was quite popular in the high-performance computing field, powering most of the top500 supercomputers. Then in the middle of the decade, Oracle realized its potential and used it as an interconnect backbone for the first Database Machine, the first Exadata. Since then, IB has been booming, Oracle utilizes and supports it in a large set of its HW products, it is the backbone of the famous Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, OVCA and even the new DB backup/recovery box. You can also use it to make servers talk highspeed IP to eachother, or to a ZFS Storage Appliance. Following Oracle's lead, even IBM has jumped the wagon, and leverages IB in its PureFlex systems, their first InfiniBand Machines.IB Structural Overview: If you want to use IB in your servers, the first thing you will need is PCI cards, in IB terms Host Channel Adapters, or HCAs. Just like NICs for Ethernet, or HBAs for FC. In these you plug an IB cable, going to an IB switch providing connection to other IB HCAs. Of course you're going to need drivers for those in your OS. Yes, these are long-available for Solaris and Linux. Now, what protocols can you talk over IB? There's a range of choices. See, IB isn't accepting package loss like Ethernet does, and hence doesn't need to rely on TCP/IP as a workaround for resends. That is, you still can run IP over IB (IPoIB), and that is used in various cases for control functionality, but the datatransfer can run over more efficient protocols - like native IB. About PCI connectivity: IB cards, as you see are fast. They bring low latency, which is just as important as their bandwidth. 
Current IB cards run at 56 gbit/s. That is slightly more than double the capacity of a PCI Gen2 slot (~25 gbit/s). And IB cards are usually equipped with two ports - that is, altogether you'd need 112 gbit/s PCI slots to be able to utilize FDR IB cards in an active-active fashion. PCI Gen3 slots provide you with around ~50 gbps. This is why most IB cards are configured in an active-standby way if both ports are used. Once again the PCI slot is the bottleneck. Anyway, the new Oracle servers are equipped with Gen3 PCI slots, and the new IB HCAs support those too. Oracle utilizes the QDR HCAs, running at 40 gbps gross, which translates to 32 gbps of net traffic due to the 10:8 signal-to-data information ratio. Consolidation techniques: Technology never stops evolving. Mellanox is working on the 100 gbps (EDR) version already, which will be optical, since signal technology doesn't allow EDR to be copper. Also, I hear you say "100 gbps? I will never use/need that much". Are you sure? Have you considered consolidation scenarios, where (for example with Oracle Virtual Network) you could consolidate your platform to a high-density virtualized solution providing many virtual 10 gbps interfaces through that 100 gbps? Technology never stops evolving. I still remember when a 10 mbps network was impressively fast. Back in those days, 16MB of RAM was a lot. Now we usually run servers with around 100,000 times more RAM. If network infrastructure speeds could grow as fast as main memory capacities, we'd have a different landscape now :) You can utilize SRIOV as well for consolidation. That is, if you run LDoms (aka Oracle VM Server for SPARC) you do not have to add physical IB cards to all your guest LDoms, and you do not need to run VIO devices through the hypervisor either (avoiding overhead). You can enable SRIOV on those IB cards, which practically virtualizes the PCI bus, and you can dedicate Physical and Virtual Functions of the virtualized HCAs as native, physical HW devices to your guests. See Raghuram's excellent post explaining SRIOV. SRIOV for IB is supported since LDoms 3.1.  This post is getting lengthy, so I will rename it to Part I and continue it in a second post.
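As a quick back-of-the-envelope check of the figures above (the 4-lane, 10 gbit/s-per-lane breakdown of QDR is my own addition; the other numbers come straight from the post):

QDR signalling rate: 4 lanes x 10 gbit/s = 40 gbit/s gross
8b/10b encoding (the 10:8 signal-to-data ratio): 40 x 8/10 = 32 gbit/s of usable data per port
Dual-port FDR, active-active: 2 x 56 gbit/s = 112 gbit/s needed on the host side
Available PCI bandwidth per slot: ~25 gbit/s (Gen2) or ~50 gbit/s (Gen3)

So even a Gen3 slot cannot feed both FDR ports at full speed, which is exactly why the second port usually ends up in standby.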

    Read the article

< Previous Page | 296 297 298 299 300 301 302 303 304 305 306 307  | Next Page >