Search Results

Search found 13710 results on 549 pages for 'relational model'.


  • How to avoid oscillation by async event based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model-view data binding in MVC. I intend to describe this kind of system in terms of data sources and hubs: data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event, a data source changes its state to the one described in the event. By publishing an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected back from the hub or from the other data sources, and that can put the system into infinite oscillation (async) or an infinite loop (sync). For example:

        A -- data source
        B -- data source
        H -- hub
        A -> H -> A            -- reflection from the hub
        A -> H -> B -> H -> A  -- reflection from another data source

    In the synchronous case it is relatively easy to solve this issue: you compare the current state with the event, and if they are equal, you neither change the state nor raise the event again. In the asynchronous case I could not find a solution yet. The state comparison does not work with async event handling, because there is only eventual consistency, and new events can be published from an inconsistent state, causing the same oscillation. For example:

        A(*->x) -> H -> B(y->x)   -- can go parallel with
        B(*->y) -> H -> A(x->y)
        -- so first A changes to state x while B changes to state y
        -- then B changes to state x while A changes to state y
        -- and so on for eternity...

    What do you think: is there an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.?

    Update: I don't think I can make this work without a lot of effort. This problem is essentially the same as syncing multiple databases in a distributed system, so I think what I really need is constraints if I want to prevent the oscillation automatically. What constraints do you suggest?
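
    The synchronous echo suppression described above (drop any event whose payload equals the data source's current state) can be sketched as follows. This is a minimal illustration, not code from the question; the Hub/DataSource names and the string-valued state are assumptions made for the example.

        // Minimal sketch of the synchronous case: a data source ignores any event that
        // would not actually change its state, so a reflected event dies out instead of
        // bouncing between the sources forever.
        using System;
        using System.Collections.Generic;

        class Hub
        {
            private readonly List<DataSource> sources = new List<DataSource>();
            public void Register(DataSource s) => sources.Add(s);
            public void Relay(string state)
            {
                // the hub relays to everyone, including the publisher (the "reflection")
                foreach (var s in sources) s.Handle(state);
            }
        }

        class DataSource
        {
            private readonly Hub hub;
            public string State { get; private set; } = "";
            public DataSource(Hub hub) { this.hub = hub; hub.Register(this); }

            public void Handle(string newState)
            {
                if (newState == State) return;   // echo suppression: nothing changed, do not re-publish
                State = newState;
                hub.Relay(State);                // only genuine changes propagate
            }
        }

        class Demo
        {
            static void Main()
            {
                var hub = new Hub();
                var a = new DataSource(hub);
                var b = new DataSource(hub);
                a.Handle("x");                   // settles after one round trip instead of looping
                Console.WriteLine("A=" + a.State + ", B=" + b.State);
            }
        }

    As the question points out, this guard relies on the comparison and the publish being atomic with respect to incoming events, which is exactly what is lost in the asynchronous, eventually consistent case.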

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000 - A Survey of Software Reuse Repositories Defense Software Repository System (DSRS) The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects... ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts... The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ... [14] DSRS - Defense Technology for Adaptable, Reliable Systems URL: http://ssed1.ims.disa.mil/srp/dsrspage.html [15] STARS - Software Technology for Adaptable, Reliable Systems URL: http://www.stars.ballston.paramax.com/index.html [16] D. E. Perry and S. S. Popovitch, “Inquire: Predicate-based use and reuse,'' in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993. ... Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

    Read the article

  • Weird Ubuntu Desktop Boot Partition On External Hard Drive

    - by Magnitus
    I have a ThinkPad with Windows 7. The last time I installed an Ubuntu/Windows dual boot, Windows was never the same afterwards and regularly got corrupted, so this time I installed Ubuntu on a separate external hard drive. I took a 500 GB external hard drive and used Windows to shrink the partition on it to 400 GB, freeing 100 GB to install Ubuntu. Then I modified the boot priority of my computer to boot from the external hard drive if present. Then I installed Ubuntu desktop on the external hard drive using a DVD and picked the most simplistic partitioning scheme I could get away with (I didn't use the automatic option, as it didn't include the external hard drive as a choice), and voilà. Fast forward some time: I'm trying to refresh my understanding of Linux partitions to install a bunch of servers, so I'm looking at the current partitioning scheme on my external hard drive and find the boot partition puzzling. sda is my integrated hard drive with Windows 7. sdb is my Ubuntu desktop external hard drive. Running parted on sdb, I get this:

        (parted) print
        Model: WD My Passport 0740 (scsi)
        Disk /dev/sdb: 500GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size    Type      File system     Flags
         1      1049kB  393GB  393GB   primary   ntfs            boot
         2      393GB   500GB  107GB   extended
         5      393GB   425GB  32.8GB  logical   linux-swap(v1)
         6      425GB   500GB  74.6GB  logical   ext4

    At this point, I'm wondering why the ntfs partition is flagged as "boot" and not my ext4 partition, which is the partition that contains / (and by extension /boot, since it's not on its own separate partition). Looking at mtab only confirms what I already know:

        eric@eric-ThinkPad-W530:~$ sudo cat /etc/mtab
        /dev/sdb6 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/cgroup tmpfs rw 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0
        none /sys/fs/pstore pstore rw 0 0
        systemd /sys/fs/cgroup/systemd cgroup rw,noexec,nosuid,nodev,none,name=systemd 0 0
        gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,user=eric 0 0
        /dev/sdb1 /media/eric/My\040Passport fuseblk rw,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0

    My lack of understanding concerning this is not vital to anything (this is only my development desktop partition), but it somehow annoys me. Any insight that could shed some light on this would be welcome.

    Read the article

  • Rebuilding a Mac Mini (early 2009)

    - by Kelly Jones
    This weekend I decided to rebuild the family’s Mac Mini.  It’s the early 2009 model and I hadn’t done it since we got it in March of 2009.  Even worse, I had done the import data step (or whatever Apple calls it) which brought over all of the data files and apps from our previous Mac.  AND that install goes back to before 2005, as far as I can remember.  SO, to say that “cruft” had built up in the operating system, is probably a bit of an understatement. The rebuild went pretty smoothly, especially since I had a couple of spare hard drives.  I hooked up a spare USB drive and formatted it for use with the Mac.  I then used Carbon Copy to clone the internal hard drive onto the USB drive.  (Carbon Copy is a great little app that I used several years ago and I was happy to see it was not only still around, but updated as well.) Once I had my backup, I shut down the Mac and replaced the internal hard drive.  I had purchased the hard drive last fall to use with my work laptop, but I got a new work laptop (with awesome dual SSDs) so I wasn’t using it anymore.  The replacement drive (Seagate Momentus 7200.4 ST9500420AS 500GB 7200 RPM 2.5" SATA 3.0Gb/s Internal Notebook Hard Drive) has more than double the original’s capacity and is also faster.  I’ll have to keep an eye on the temperature, since that 7200 drive will run hotter. Opening the Mac Mini is not for the easily intimidated!  That cool little case is quite the pain to open.  Luckily, OWC put a video together here.  After replacing the drive, I then installed a clean copy of OS 10.5 using the DVDs that came with the Mac.  After the OS, it was time to reinstall the apps.  I downloaded some of the freeware, just to make sure I had the latest versions.  For the rest, I just copied from the backup cloned drive to the new drive.  (I love the way most Mac apps are written – with almost everything contained within a “package” that I can just copy from one drive to another.  MUCH better than the Windows way of using shared DLLs and the registry to store critical pieces that the app needs in order to run!) The whole process took longer than I would have preferred, but it was long overdue.  It definitely “feels” faster, especially boot time and application launches.

    Read the article

  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list of non-IT-driven algorithmic patterns (which can nevertheless be helped by IT implementations). An example list would be:

        name; short desc; reference
        Travelling Salesman; find the shortest possible route over multiple targets; http://en.wikipedia.org/wiki/Travelling_salesman_problem
        Resource Disposition (aka Regulation); distribute a limited/exceeding input across a given number of output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html

    If there is no such list, but you instantly think of something specific, please put it on the desk. Maybe I can compile something out of the input I get here (actually I am quite frustrated, as I did not find any such list through my own research). Details on scoping: I found it very hard to formulate this in a way that excludes everything I do not need (which may be why I did not find anything on Google). There is a database-centric definition of what I am looking for in the section 'Processes' of the second example link. That somehow fits, but the database focus drifts away from the pattern thinking I have in mind. So here are my own thoughts on what's in and what's out:

        - I am NOT looking for a foundational algorithm reference list of the kind implemented as the basis of a programming language. E.g. the PHP reference describes substr and strlen; those implement algorithms, but they are not what I am looking for.
        - The problem the algorithm addresses would exist even if there were no computers (or other IT components).
        - The main focus of the algorithm is NOT to help other algorithms.
        - Chances are high that there are implementations of the solution, or some workaround without any IT support, out there in the world.
        - However, the algorithm could beneficially be implemented in, or fully supported by, a software application. That means: the problem behind the algorithm has to be addressed anyway, but running an algorithm implementation in software automates the process (which is why I posted this on Stack Overflow and not somewhere else).
        - Typically such algorithm implementations have more than one input field value and more than one output field value, which implies they could not be implemented as a simple function (which is limited to producing no more than one output value).
        - In a normalized data model, the outputs of such implementations often span multiple rows (sometimes multiple tables), whereby the number of output rows depends on the input parameters and on the rows in the table(s) at start time, which implies that any implementation/procedure must interact with a database (read and/or write).
        - I am mainly looking for patterns, not for specific implementations. Example: the Travelling Salesman assumes arbitrary coordinates; it does not say "you need a table targets with fields x and y". However, sometimes descriptions focus very much on examples with specific implementations; no worries, as long as the pattern gets clear.

    Read the article

  • Finding Tools Guidance in OUM

    - by user716869
    OUM is not tool – specific. However, it does include tool guidance.  Tool guidance in OUM includes: a mention of a tool that could be used to complete a specific task(s) templates created with a specific tool example work products in a specific tool links to tool resources Tool Supplemental Guides So how do you find all this helpful tool information? Start at the lowest level first – the Task Overview.  Even though the task overviews are written tool-agnostic, they sometimes mention suggestions, or examples of a tool that might be used to complete the task.  More specific tool information can be found in the Task Overview, Templates and Tools section.  In some cases, the tool used to create the template (for example, Microsoft Word, Powerpoint, Project and Visio) is useful. The Templates and Tools section also provides more specific tool guidance, such as links to: White Papers Viewlets Example Work Products Additional Resources Tool Supplemental Guides If you’re more interested in seeing what tools might be helpful in general for your project or to see if there is any tool guidance for a specific tool that your project is committed to using, go to the Supplemental Guidance page in OUM.  This page is available from the Method Navigation pull down located in the header of almost every OUM page. When you open the Supplemental Guidance page, the first thing you see is a table index of everything that is included on the page.  At the top of the right column are all the Tool Supplemental Guides available in OUM.  Use the index to navigate to any of the guides. Next in the right column is Discipline/Industry/View Resources and Samples.  Use the index to navigate to any of these topics and see what’s available and more specifically, if there is any tool guidance available.  For example, if you navigate to the Cloud Resources, you will find a link to the IT Strategies from Oracle page that provides information for Cloud Practitioner Guides, Cloud Reference Architectures and Cloud White Papers, including the Cloud Candidate Selection Tool and Cloud Computing Maturity Model. The section for Method Tool and Technique Cross References can take you to the Task to Tool Cross Reference.  This page provides a task listing with possible helpful tools and links to more information regarding the tools.  By no means is this tool guidance all inclusive.  You can use other tools not mentioned in OUM to complete an OUM task. The Method Tool and Technique Cross References can also take you to the various Technique pages (Index and Cross References).  While techniques are not necessarily “tools,” they can certainly provide valuable assistance in completing tasks. In the Other Resources section of the Supplemental Guidance page, you find links to the viewlets and white papers that are included within OUM.

    Read the article

  • Is it reasonable to insist on reproducing every defect before diagnosing and fixing it?

    - by amphibient
    I work for a software product company. We have large enterprise customers who implement our product and we provide support to them. For example, if there is a defect, we provide patches, etc. In other words, It is a fairly typical setup. Recently, a ticket was issued and assigned to me regarding an exception that a customer found in a log file and that has to do with concurrent database access in a clustered implementation of our product. So the specific configuration of this customer may well be critical in the occurrence of this bug. All we got from the customer was their log file. The approach I proposed to my team was to attempt to reproduce the bug in a similar configuration setup as that of the customer and get a comparable log. However, they disagree with my approach saying that I should not need to reproduce the bug (as that is overly time-consuming and will require simulating a server cluster on VMs) and that I should simply "follow the code" to see where the thread- and/or transaction-unsafe code is and put the change working off of a simple local development, which is not a cluster implementation like the environment from which the occurrence of the bug originates. To me, working out of an abstract blueprint (program code) rather than a concrete, tangible, visible manifestation (runtime reproduction) seems like a difficult working environment (for a person of normal cognitive abilities and attention span), so I wanted to ask a general question: Is it reasonable to insist on reproducing every defect and debug it before diagnosing and fixing it? Or: If I am a senior developer, should I be able to read (multithreaded) code and create a mental picture of what it does in all use case scenarios rather than require to run the application, test different use case scenarios hands on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies? In my opinion, any fix submitted in response to an incident ticket should be tested in an environment simulated to be as close to the original environment as possible. How else can you know that it will really remedy the issue? It is like releasing a new model of a vehicle without crash testing it with a dummy to demonstrate that the air bags indeed work. Last but not least, if you agree with me: How should I talk with my team to convince them that my approach is reasonable, conservative and more bulletproof?

    Read the article

  • Why distant objects draw in front of close objects?

    - by cad
    I am rendering two cubes in space using XNA 4.0 and the layering of objects only works from certain angles. Here is what I see from the front angle (everything ok). Here is what I see from behind. This is my draw method. Cubes are drawn by serverManager and serverManager1:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            switch (_gameStateFSM.State)
            {
                case GameFSMState.GameStateFSM.INTROSCREEN:
                    spriteBatch.Begin();
                    introscreen.Draw(spriteBatch);
                    spriteBatch.End();
                    break;
                case GameFSMState.GameStateFSM.GAME:
                    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
                    // Text
                    screenMessagesManager.Draw(spriteBatch, firstPersonCamera.cameraPosition, fpsHelper.framesPerSecond);
                    // Camera
                    firstPersonCamera.Draw();
                    // Servers
                    serverManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
                    serverManager1.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
                    // Room
                    //roomManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix);
                    spriteBatch.End();
                    break;
                case GameFSMState.GameStateFSM.EXITGAME:
                    break;
                default:
                    break;
            }
            base.Draw(gameTime);
            fpsHelper.IncrementFrameCounter();
        }

    serverManager and serverManager1 are instances of the same class ServerManager that draws a cube. The draw method for ServerManager is:

        public void Draw(GraphicsDevice graphicsDevice, Matrix viewMatrix, Matrix projectionMatrix)
        {
            // Set the World matrix which defines the position of the cube
            cubeEffect.World = Matrix.CreateTranslation(modelPosition);
            // Set the View matrix which defines the camera and what it's looking at
            cubeEffect.View = viewMatrix;
            cubeEffect.Projection = projectionMatrix;
            // Enable textures on the Cube Effect. This is necessary to texture the model
            cubeEffect.TextureEnabled = true;
            cubeEffect.Texture = cubeTexture;
            // Enable some pretty lights
            cubeEffect.EnableDefaultLighting();
            // Apply the effect and render the cube
            foreach (EffectPass pass in cubeEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                cubeToDraw.RenderToDevice(graphicsDevice);
            }
        }

    Obviously there is something I am doing wrong. Any hint of where to look? (Maybe z-buffer or occlusion tests?)
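
    This behaviour is consistent with the depth buffer being disabled rather than with occlusion: in XNA 4.0, SpriteBatch.Begin replaces several device render states, including setting DepthStencilState.None, so cubes drawn inside the same block are rendered without depth testing and simply appear in draw order. A sketch of the relevant lines inside the GAME case, restoring the depth (and blend) state before the 3D draws; the rest of the method is unchanged:

        // After spriteBatch.Begin(...) has swapped the render states, restore depth
        // testing and an opaque blend state before drawing the 3D geometry.
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.BlendState = BlendState.Opaque;

        serverManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
        serverManager1.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);

    An alternative with the same effect is to call spriteBatch.End() before the 3D drawing and open a second spriteBatch.Begin()/End() pair afterwards for any remaining 2D overlays, so the sprite states never leak into the cube rendering.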

    Read the article

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: Tips to extend battery life for laptops and notebooks 24 answers How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows, installed Ubuntu 12.04 and the initial battery life was only 2 hours. With some tweaks (described below) it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z: i5-3337U processor 32GB MSATA, 500GB HDD (5400rpm) AMD Radeon HD7570M graphics card I have put ext4 partitions on both the SSD and the HDD, and have mounted / to the SSD and /home to the HDD. I also put a 24gb linux swap partition at the start of the HDD, though I figure this won't be used all that much (the laptop has 8gb of RAM). After googling around and reading Ask Ubuntu and other sites extensively, I have done the following steps, and they have improved the battery life ~30 minutes (exact improvement not clear, but battery life is still nowhere near 4-5 hours). Installed Jupiter (and set Performance to "Power Saving") Installed laptop-mode-tools cat /proc/sys/vm/laptop_mode now outputs 5 (previously it output 0) But it's not clear that this will help: AskUbuntu question Turned down the brightness of my screen from full to 1/3 Other things I have heard about but have not tried for fear of frying the laptop or my linux install: Add "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg Enable ALPM, but it may already be enabled in 12.04? Enable i915 framebuffer compression Use a propietary driver for the graphics card? Turn off the graphics card? (what would happen if I relied on the internal Intel bridge?) Use TLP? Spin down the HDD more aggressively (howto, but I think laptop-mode-tools does this already) The only other thing I've noticed is that plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69 celsius and the System Monitor shows CPU load at 7% so I don't think it's the CPU. Maybe it's the graphics card? Also, I've set up MongoDB and LAMP on the machine as well. When I run powertop MongoDB is high in the list, but I'm not sure if that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time. Edit - Additional info as requested $ lspci -nnk | grep -iEA3 "(graphics|vga)" 00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09) Subsystem: Dell Device [1028:057f] Kernel driver in use: i915 Kernel modules: i915 -- 02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841] Subsystem: Dell Device [1028:057f] Kernel driver in use: radeon Kernel modules: radeon

    Read the article

  • IPv6, isn't it just a few extra bits?

    - by rclewis
    It's always an interesting task to try and explain what you do to family and friends. I have described IPv6 as the "Next Generation Internet" or "Second Internet", but the hollow expressions on my kids' faces scream for the instant relief of the latest video game. Never one to give up easily, I have formulated a new example - the Post Office... Similar to the Post Office, the Internet delivers mail and packages based on addresses. As the number of residences, businesses, and delivery locations increased, the 5-digit ZIP Code (Washington, DC 20005) was expanded to ZIP+4, allowing for more precise delivery points (Postmaster General, Washington, DC 20260-3100). Ah, if only computers were as simple. IPv6 isn't an add-on or expansion of the existing IPv4 addressing; it is a new addressing model which will allow the internet to grow from a single computer in the basement of a university or your parents' kitchen table to support the multitude of smartphones, smart TVs, tablets, DVRs, and disc players, all clamoring to connect for information. Unfortunately there are only a finite number of IPv4 public addresses left, and those are being consumed at an ever-increasing rate. Few people could have predicted the explosive growth of the internet or the shortage of IPv4 addresses we now face - but there is a "Plan B", and that is the vastly larger address space of IPv6. Many in the industry have labeled this a "business continuity" problem, when in fact most companies will be able to continue conducting business once they run out of existing IPv4 addresses. The problem is really a customer continuity problem: how will businesses communicate with existing customers, and reach new customers online whose only option is to adopt IPv6, when IPv4 is depleted? Perhaps a first step is publishing a blog that is also accessible via IPv6 - it's just a few extra bits.

    Join us for the Oracle OpenWorld 2012 session: Navigating IPv6 @ Oracle
    Thursday, Oct. 4th, 2:15PM - 3:15PM, Palace Hotel - Concert
    Learn more about IPv6 Technologies at Oracle

    Read the article

  • What are the best practices for using NHibernate sessions in ASP.NET (MVC / Web API)?

    - by mrt181
    I have the following setup in my project:

        public class WebApiApplication : System.Web.HttpApplication
        {
            public static ISessionFactory SessionFactory { get; private set; }

            public WebApiApplication()
            {
                this.BeginRequest += delegate
                {
                    var session = SessionFactory.OpenSession();
                    CurrentSessionContext.Bind(session);
                };
                this.EndRequest += delegate
                {
                    var session = SessionFactory.GetCurrentSession();
                    if (session == null)
                    {
                        return;
                    }
                    session = CurrentSessionContext.Unbind(SessionFactory);
                    session.Dispose();
                };
            }

            protected void Application_Start()
            {
                AreaRegistration.RegisterAllAreas();
                FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
                RouteConfig.RegisterRoutes(RouteTable.Routes);
                BundleConfig.RegisterBundles(BundleTable.Bundles);
                var assembly = Assembly.GetCallingAssembly();
                SessionFactory = new NHibernateHelper(assembly, Server.MapPath("/")).SessionFactory;
            }
        }

        public class PositionsController : ApiController
        {
            private readonly ISession session;

            public PositionsController()
            {
                this.session = WebApiApplication.SessionFactory.GetCurrentSession();
            }

            public IEnumerable<Position> Get()
            {
                var result = this.session.Query<Position>().Cacheable().ToList();
                if (!result.Any())
                {
                    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
                }
                return result;
            }

            public HttpResponseMessage Post(PositionDataTransfer dto)
            {
                //TODO: Map dto to model
                IEnumerable<Position> positions = null;
                using (var transaction = this.session.BeginTransaction())
                {
                    this.session.SaveOrUpdate(positions);
                    try
                    {
                        transaction.Commit();
                    }
                    catch (StaleObjectStateException)
                    {
                        if (transaction != null && transaction.IsActive)
                        {
                            transaction.Rollback();
                        }
                    }
                }
                var response = this.Request.CreateResponse(HttpStatusCode.Created, dto);
                response.Headers.Location = new Uri(this.Request.RequestUri.AbsoluteUri + "/" + dto.Name);
                return response;
            }

            public void Put(int id, string value)
            {
                //TODO: Implement PUT
                throw new NotImplementedException();
            }

            public void Delete(int id)
            {
                //TODO: Implement DELETE
                throw new NotImplementedException();
            }
        }

    I am not sure if this is the recommended way to get the session into the controller. I was thinking about using DI, but I am not sure how to inject the session that is opened and bound in the BeginRequest delegate into the controller's constructor, so that I end up with this:

        public PositionsController(ISession session)
        {
            this.session = session;
        }

    Question: what is the recommended way to use NHibernate sessions in ASP.NET MVC / Web API?
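
    One way to get the request-bound session into the controller through its constructor, without changing the BeginRequest/EndRequest wiring above, is to replace Web API's controller activator. This is only a sketch of the idea, not code from the question; a DI container (Ninject, Autofac, StructureMap, ...) with a per-request binding for ISession achieves the same thing with less manual code.

        // Minimal sketch: a custom IHttpControllerActivator that hands the session bound in
        // BeginRequest (via CurrentSessionContext.Bind) to the controller constructor.
        using System;
        using System.Net.Http;
        using System.Web.Http;
        using System.Web.Http.Controllers;
        using System.Web.Http.Dispatcher;
        using NHibernate;

        public class SessionControllerActivator : IHttpControllerActivator
        {
            public IHttpController Create(HttpRequestMessage request,
                                          HttpControllerDescriptor controllerDescriptor,
                                          Type controllerType)
            {
                if (controllerType == typeof(PositionsController))
                {
                    // the session opened and bound in the BeginRequest delegate
                    var session = WebApiApplication.SessionFactory.GetCurrentSession();
                    return new PositionsController(session);
                }
                // fall back to parameterless construction for any other controller
                return (IHttpController)Activator.CreateInstance(controllerType);
            }
        }

        // Registered once in Application_Start:
        //     GlobalConfiguration.Configuration.Services.Replace(
        //         typeof(IHttpControllerActivator), new SessionControllerActivator());

    With this in place the controller can take ISession as a constructor parameter, exactly as in the snippet above, and no longer needs to reach into the static SessionFactory itself.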

    Read the article

  • Tiling Problem Solutions for Various Size "Dominoes"

    - by user67081
    I've got an interesting tiling problem. I have a large image of squares (size 128k, i.e. 131072 squares) with dimensions 256x512. I want to fill this image with certain grain types (a 1x1 tile, a 1x2 strip, a 2x1 strip, and a 2x2 square) and have no overlap, no holes, and no extension past the image boundary. Given some probability for each of these grain types, a list of the number required to be placed is generated for each. Obviously an iterative/brute-force method doesn't work well here if we just randomly place the pieces; instead a certain algorithm is required:

        1) All 2x2 square grains are randomly placed until exhaustion.
        2) 1x2 and 2x1 grains are randomly placed, alternating, until exhaustion.
        3) The remaining 1x1 tiles are placed to fill in all holes.

    It turns out this algorithm works pretty well for some cases and has no problem filling the entire image; however, as you might guess, increasing the probability (and thus number) of 1x2 and 2x1 grains eventually causes the placement to stall (since there are too many holes created by the strips and not all of them can be placed). My approach to this has been as follows:

        1) Create a mini-image of size 8x8 or 16x16.
        2) Fill this image randomly, following the algorithm specified above, so that the desired probability for the entire image is realized in the mini-image.
        3) Create N of these mini-images and then randomly, successively place them in the large image.

    Unfortunately there are some drawbacks to this simplification. 1) Given the small size of the mini-images, nailing an exact probability for the entire image is not possible; for example, if I want p(2x1)=p(1x2)=0.4, the mini-image may only give 0.41 as the closest achievable probability. 2) The mini-images create a pseudo-boundary where no overlaps occur, which isn't really representative of the model this is being used for. 3) There is only a fixed number of mini-images, so I'm not sure how random this really is. I'm really just looking to brainstorm possible solutions to this. My main concern is to nail down closer probabilities. One might suggest I just increase the mini-image size; well, I have, and it turns out that in certain cases (p(1x2)=p(2x1)=0.5) the 16x16 mini-image isn't even iteratively solvable, so it's pretty obvious how difficult it is to solve this randomly for anything greater than 8x8. I'd love to hear some ideas. Thanks.
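
    A minimal sketch of the three-phase placement described above, for reference. The grid size, grain counts and retry limit are illustrative assumptions, not values from the question; a real implementation would derive the counts from the grain probabilities.

        // Sketch of the 3-phase fill: 2x2 squares first, then alternating 1x2/2x1 strips,
        // then 1x1 tiles in whatever holes remain. Placement is random with a bounded
        // number of retries, so it can stall exactly as described for strip-heavy mixes.
        using System;

        class GrainFill
        {
            static readonly Random Rng = new Random();

            static bool CanPlace(bool[,] g, int r, int c, int h, int w)
            {
                if (r + h > g.GetLength(0) || c + w > g.GetLength(1)) return false;
                for (int i = r; i < r + h; i++)
                    for (int j = c; j < c + w; j++)
                        if (g[i, j]) return false;
                return true;
            }

            static bool PlaceRandom(bool[,] g, int h, int w, int tries = 2000)
            {
                while (tries-- > 0)
                {
                    int r = Rng.Next(g.GetLength(0)), c = Rng.Next(g.GetLength(1));
                    if (!CanPlace(g, r, c, h, w)) continue;
                    for (int i = r; i < r + h; i++)
                        for (int j = c; j < c + w; j++)
                            g[i, j] = true;
                    return true;
                }
                return false;                      // stalled: no free spot found
            }

            static void Main()
            {
                var grid = new bool[16, 16];       // mini-image size (assumption)
                int squares = 20, strips = 40;     // grain counts (assumptions)

                for (int i = 0; i < squares; i++)                        // phase 1: 2x2 squares
                    if (!PlaceRandom(grid, 2, 2)) break;
                for (int i = 0; i < strips; i++)                         // phase 2: alternate 1x2 / 2x1
                    if (!PlaceRandom(grid, i % 2 == 0 ? 1 : 2, i % 2 == 0 ? 2 : 1)) break;
                int singles = 0;                                         // phase 3: 1x1 fills the holes
                for (int r = 0; r < grid.GetLength(0); r++)
                    for (int c = 0; c < grid.GetLength(1); c++)
                        if (!grid[r, c]) { grid[r, c] = true; singles++; }

                Console.WriteLine("1x1 tiles used to fill holes: " + singles);
            }
        }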

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History We are currently using a so called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details - the gateway will then return him to a success/failure callback page). That's easy and straight-forward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site etc). Intention & Problem description We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is on how to cater with all (or rather most) of the things that may happen during processing - bearing in mind that normally simplicity is robust whereas complexity is fragile. Examples User abort: The user inputs Credit Card details and hits submit. An XML message to the provider's gateway is sent and waiting for response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? might it be better to redirect the user to a "please wait"-page, that in turn opens an AJAX or other request to the actual processor that does not rely on the connection? Database goes away sounds over-complicated, but with e.g. a webserver in the States and a DB in the UK, it has happened and will happen again: User clicks together his order, payment request has been sent to the provider but the response cannot be stored in the database. What approach could I use, using PHP to sort of start an SQL like "Transaction" that only at the very end gets committed or rolled back, depending on the individual steps? Should then neither commit or roll back have happened, I could sort of "lock" the user to prevent him from paying again or to improperly account for payments - but how? And what else do I need to consider technically? None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and neither Google or my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!

    Read the article

  • way to do if(x > x2) x = x2 with rotation?

    - by CyanPrime
    Alright, so I've got this walking code and some collision detection. The collision detection returns a Vector3f of the closest point on the triangle to the projected position (pos + move). In the walking method I then project my position again, and if the projected position's x crosses the nearest point's x, the projected position's x becomes the nearest point's x; the same for their z components. But if I'm moving in a direction other than 0 degrees in XZ, how would I rotate the equation/condition? Here is what I've got so far, and it's not working, as I go through walls and such:

        Vector3f move = new Vector3f(0, 0, 0);
        move.x = (float) -Math.cos(Math.toRadians(yaw));
        move.z = (float) -Math.sin(Math.toRadians(yaw));
        // System.out.println("slopeNormal.z: " + slopeNormal.z + "move.z: " + move.z);
        move.normalise();
        move.scale(movementSpeed * delta);

        float horizontaldotproduct = move.x * slopeNormal.x + move.z * slopeNormal.z;
        move.y = -horizontaldotproduct * slopeNormal.y;

        Vector3f dest = colCheck(pos, move, model, drawDist, movementSpeed, delta);
        Vector3f projPos = new Vector3f(pos);
        Vector3f.add(projPos, move, projPos);

        if (projPos.x > 0 && dest.x > 0 && projPos.x < dest.x) projPos.x = dest.x;
        else if (projPos.x < 0 && dest.x < 0 && projPos.x > dest.x) projPos.x = dest.x;

        if (projPos.z > 0 && dest.z > 0 && projPos.z < dest.z) projPos.z = dest.z;
        else if (projPos.z < 0 && dest.z < 0 && projPos.z > dest.z) projPos.z = dest.z;

        pos = new Vector3f(projPos);
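
    One way to make that clamp independent of the movement angle is to compare along the movement direction itself rather than along the world x and z axes: measure how far the step and the collision point lie along the normalized move vector and clamp the travelled distance. The following is only a sketch of that idea, not code from the question; it is written in C# with a small stand-in type for the XZ components of the Vector3f used above.

        // Sketch: clamp the step length along the current movement direction so it never
        // travels further than dest (the closest allowed point returned by colCheck).
        using System;

        struct Vec2
        {
            public float X, Z;
            public Vec2(float x, float z) { X = x; Z = z; }
            public static Vec2 operator -(Vec2 a, Vec2 b) => new Vec2(a.X - b.X, a.Z - b.Z);
            public static Vec2 operator +(Vec2 a, Vec2 b) => new Vec2(a.X + b.X, a.Z + b.Z);
            public static Vec2 operator *(Vec2 a, float s) => new Vec2(a.X * s, a.Z * s);
            public static float Dot(Vec2 a, Vec2 b) => a.X * b.X + a.Z * b.Z;
            public float Length() => (float)Math.Sqrt(X * X + Z * Z);
        }

        static class WalkClamp
        {
            // pos: current position, move: desired step, dest: closest allowed point from the collision check.
            public static Vec2 ClampAlongMove(Vec2 pos, Vec2 move, Vec2 dest)
            {
                float len = move.Length();
                if (len < 1e-6f) return pos;                 // not moving
                Vec2 dir = move * (1f / len);                // unit movement direction (works for any yaw)
                float allowed = Vec2.Dot(dest - pos, dir);   // how far we may travel along dir
                if (allowed < 0f) allowed = 0f;              // collision point lies behind us
                float travel = Math.Min(len, allowed);       // never go further than the collision point
                return pos + dir * travel;                   // same clamp, rotated with the movement
            }
        }

    Whether clamping toward dest is the right collision response for this game is a separate question; the sketch only shows how the axis-aligned "if(x > x2) x = x2" comparison generalizes when the movement is rotated.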

    Read the article

  • SQLAlchemy session management in long-running process

    - by codeape
    Scenario: A .NET-based application server (Wonderware IAS/System Platform) hosts automation objects that communicate with various equipment on the factory floor. CPython is hosted inside this application server (using Python for .NET). The automation objects have scripting functionality built in (using a custom, .NET-based language). These scripts call Python functions. The Python functions are part of a system to track Work-In-Progress on the factory floor. The purpose of the system is to track the produced widgets along the process, ensure that the widgets go through the process in the correct order, and check that certain conditions are met along the process. The widget production history and widget state are stored in a relational database; this is where SQLAlchemy plays its part. For example, when a widget passes a scanner, the automation software triggers the following script (written in the application server's custom scripting language):

        ' wiget_id and scanner_id provided by automation object
        ' ExecFunction() takes care of calling a CPython function
        retval = ExecFunction("WidgetScanned", widget_id, scanner_id);
        ' if the python function raises an Exception, ErrorOccured will be true
        ' in this case, any errors should cause the production line to stop.
        if (retval.ErrorOccured) then
            ProductionLine.Running = False;
            InformationBoard.DisplayText = "ERROR: " + retval.Exception.Message;
            InformationBoard.SoundAlarm = True
        end if;

    The script calls the WidgetScanned Python function:

        # pywip/functions.py
        from pywip.database import session
        from pywip.model import Widget, WidgetHistoryItem
        from pywip import validation, StatusMessage
        from datetime import datetime

        def WidgetScanned(widget_id, scanner_id):
            widget = session.query(Widget).get(widget_id)
            validation.validate_widget_passed_scanner(widget, scanner)  # raises exception on error
            widget.history.append(WidgetHistoryItem(timestamp=datetime.now(), action=u"SCANNED", scanner_id=scanner_id))
            widget.last_scanner = scanner_id
            widget.last_update = datetime.now()
            return StatusMessage("OK")

        # ... there are a dozen similar functions

    My question is: how do I best manage SQLAlchemy sessions in this scenario? The application server is a long-running process, typically running months between restarts. The application server is single-threaded. Currently, I do it the following way: I apply a decorator to the functions I make available to the application server:

        # pywip/iasfunctions.py
        from pywip import functions

        def ias_session_handling(func):
            def _ias_session_handling(*args, **kwargs):
                try:
                    retval = func(*args, **kwargs)
                    session.commit()
                    return retval
                except:
                    session.rollback()
                    raise
            return _ias_session_handling

        # ... actually I populate this module with decorated versions of all
        # the functions in pywip.functions dynamically
        WidgetScanned = ias_session_handling(functions.WidgetScanned)

    Question: is the decorator above suitable for handling sessions in a long-running process? Should I call session.remove()? The SQLAlchemy session object is a scoped session:

        # pywip/database.py
        from sqlalchemy.orm import scoped_session, sessionmaker
        session = scoped_session(sessionmaker())

    I want to keep the session management out of the basic functions, for two reasons: 1) There is another family of functions, sequence functions; the sequence functions call several of the basic functions, and one sequence function should equal one database transaction. 2) I need to be able to use the library from other environments: a) from a TurboGears web application, in which case session management is done by TurboGears; b) from an IPython shell, in which case commit/rollback will be explicit. (I am truly sorry for the long question. But I felt I needed to explain the scenario. Perhaps not necessary?)

    Read the article

  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it should belong. I have an idea myself, but I don't know if it's the only method and more importantly, the right method. My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe it's more, I don't even know because they don't tell me everything), and the current system I've written over the past months is maybe 2 stages through. This meets our deadline of completing those systems by Monday next week... but at that point my client is planning on making the site live from that point. In effect, their work with their clients will be running parallel with my work for them. As I complete my own work on a separate test server, I'll push each following stage of the process onto the live server. Scheduled downtimes during non-business times (like a weekend) will be available for me to perform these pushes. Keeping pace so that my development is faster than the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post. In this system, we have sets of features which will create lists for their associated content types and field types when activated, and delete these lists when the feature is deactivated. Most updates don't need to deactivate and reactivate these features, such as workflow changes, custom actions, custom forms, and similar ilk. But there are some parts which do require this. On my test server, it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, it's absolutely unacceptable to do this. So when I need to implement a new change in functionality, I need to be able to store the currently present data in several lists, deactivate the feature, reactivate the feature, and restore all of this data. Perhaps I have hoist myself by my own petard with the feature system I implemented. Unfortunately, the necessity to later on make several of these "project sites" meant I had to do a lot of my code with the concept of "Can be deployed repeatedly" in mind. My current plan is to run through lists and libraries which will be affected by the particular feature that is to be reset. Files and all of their versions will be saved in a directory on the server. Then, a set of text files will be used to store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then, I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each one is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that I make sure I don't re-update it later with an incorrect value, of course). After that, we run through all the rest of the items in the order most conducive to keeping the relational data correct. This roughly summarizes what my current plan is. 
To my advantage, there are no long running workflows in the system that will be affected by this, so there's nothing I will have to worry about making sure nothing is "still running" when I do this stuff. I don't really know all the cons of this approach... I can imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Is there anyone who can think of a better idea? Or will anyone just tell me that I really have no other choice? Thanks in advance!

    Read the article

  • Suggestions on how to map from Domain (ORM) objects to Data Transfer Objects (DTO)

    - by FryHard
    The current system that I am working on makes use of Castle ActiveRecord to provide ORM (Object-Relational Mapping) between the domain objects and the database. This is all well and good and most of the time actually works well! The problem comes with Castle ActiveRecord's support for asynchronous execution, or more specifically with the SessionScope that manages the session the objects belong to. Long story short, bad stuff happens! We are therefore looking for a way to easily convert (think automagically) from the domain objects (which know that a DB exists and care) to the DTO objects (which know nothing about the DB and care not for sessions, mapping attributes or all things ORM). Does anyone have suggestions on doing this? For a start I am looking for a basic one-to-one mapping of objects: domain object Person will be mapped to, say, PersonDTO. I do not want to do this manually, since that is a waste. Obviously reflection comes to mind, but I am hoping that with some of the better IT knowledge floating around this site something "cooler" will be suggested. Oh, I am working in C#, and the ORM objects are, as said before, mapped with Castle ActiveRecord. Example code: by @ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, a capture form controller, domain objects, an ActiveRecord repository and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine, or just change the .config file. Basic details of the example follow.

    The capture form. The capture form has a reference to the controller:

        private CompanyCaptureController MyController { get; set; }

    On initialise of the form it calls MyController.Load():

        private void InitForm()
        {
            MyController = new CompanyCaptureController(this);
            MyController.Load();
        }

    This will return back to a method called LoadCompleted():

        public void LoadCompleted(Company loadCompany)
        {
            _context.Post(delegate
            {
                CurrentItem = loadCompany;
                bindingSource.DataSource = CurrentItem;
                bindingSource.ResetCurrentItem();
                //TODO: This line will throw the exception since the session scope
                //      used to fetch loadCompany is now gone.
                grdEmployees.DataSource = loadCompany.Employees;
            }, null);
        }

    This is where the "bad stuff" occurs, since we are using the child list of Company, which is set to lazy load.

    The controller. The controller has a Load method that was called from the form; it then calls the async helper to asynchronously call the LoadCompany method and then return to the capture form's LoadCompleted method:

        public void Load()
        {
            new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted);
        }

    The LoadCompany() method simply makes use of the repository to find a known company:

        public Company LoadCompany()
        {
            return ActiveRecordRepository<Company>.Find(Setup.company.Identifier);
        }

    The rest of the example is rather generic: it has two domain classes which inherit from a base class, a setup file to insert some data, and the repository to provide the ActiveRecordMediator abilities.
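
    Since reflection is named above as the obvious fallback, here is a minimal sketch of a reflection-based one-to-one mapper. The Person/PersonDTO names follow the example in the question; matching by property name and assignable type is an assumption, and mapping libraries such as AutoMapper cover the same ground with far more features (nested objects, collections, configuration).

        // Minimal sketch: copy same-named, type-compatible public properties from a
        // domain object onto a new DTO instance. Nested objects and collections
        // would need extra handling.
        using System;
        using System.Reflection;

        public static class SimpleMapper
        {
            public static TDto MapTo<TDto>(object source) where TDto : new()
            {
                if (source == null) throw new ArgumentNullException("source");
                var dto = new TDto();
                foreach (PropertyInfo target in typeof(TDto).GetProperties())
                {
                    if (!target.CanWrite) continue;
                    PropertyInfo origin = source.GetType().GetProperty(target.Name);
                    if (origin == null || !origin.CanRead) continue;
                    if (!target.PropertyType.IsAssignableFrom(origin.PropertyType)) continue;
                    target.SetValue(dto, origin.GetValue(source, null), null);
                }
                return dto;
            }
        }

        // Usage, with Person and PersonDTO as in the question:
        //     PersonDTO dto = SimpleMapper.MapTo<PersonDTO>(person);

    The important detail for the ActiveRecord scenario is to call the mapper while the SessionScope that loaded the entity is still open, so any lazy-loaded properties the DTO needs are materialized before the scope is disposed.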

    Read the article

  • What persistence layer (XML or MySQL) should I use for this XML data?

    - by fayer
    I wonder how I could store an XML structure in a persistence layer, because the data looks like this:

        <entity id="1000070">
          <name>apple</name>
          <entities>
            <entity id="7002870">
              <name>mac</name>
              <entities>
                <entity id="7002907">
                  <name>leopard</name>
                  <entities>
                    <entity id="7024080">
                      <name>safari</name>
                    </entity>
                    <entity id="7024701">
                      <name>finder</name>
                    </entity>
                  </entities>
                </entity>
              </entities>
            </entity>
            <entity id="7024080">
              <name>iphone</name>
              <entities>
                <entity id="7024080">
                  <name>3g</name>
                </entity>
                <entity id="7024701">
                  <name>3gs</name>
                </entity>
              </entities>
            </entity>
            <entity id="7024080">
              <name>ipad</name>
            </entity>
          </entities>
        </entity>

    As you can see, it has no static structure but a dynamic one: mac has 2 descendant levels while iphone has 1 and ipad has 0. I wonder how I could best store this data. What are my options? It seems impossible to store it in a MySQL database due to this dynamic structure. Is the only way to store it as an XML file? Is the speed of getting information out (XPath/XQuery/SimpleXML) worse or better for an XML file than for MySQL? What are the pros and cons? Do I have other options? Is storing information in XML files suited for a lot of users accessing it at the same time? Feedback would be great! Thanks! EDIT: Now I've noticed that I could use something called an XML database to store XML data. Could someone shed some light on this? Apparently it's not as simple as just storing the data in an XML file?

    Read the article

  • Reinstall Acer OEM Windows 8, Windows 8 Recovery for Acer Aspire V5 122p

    - by stwindr
    My Acer Aspire V5-122P-61456G50NSS, model MS2377, has crashed altogether. It came preloaded with Windows 8 and I upgraded to Windows 8.1 three or four days before the crash. Unfortunately I did not make any recovery media before the crash. When accessing eRecovery on the Acer store with my PC's serial number, it says no RCD is available for it. I tried recovery by loading the recovery manager (Left Alt + F10); various other advanced startup options (like holding the Shift key while turning on, or pressing F8) return nothing - no luck. However, I am able to enter the BIOS. After researching this condition on various PC forums, here are my questions: I read that a 'Windows Recovery Drive' can be made on any PC running Windows 8 and could be used to repair another PC. Does anybody in the SuperUser community have one (or a link to download the same from somewhere), as I'm unable to find anybody running Windows 8 among my friends? I downloaded a Windows 8 Pro ISO and made a bootable USB. I was able to go to the 'Repair Your Computer' option, and after going to the 'Reset your PC' option I found that my recovery partition is gone/missing. I tried all the options available but no luck. Then I tried to install from that Windows 8 Pro ISO but got the message: "The product key entered does not match any of the Windows images available for installation. Enter a different product key". Before this message I did not get any form in which to fill in a product key! Does this mean that the installer was picking up the key from the BIOS (OEM key)? And maybe the installation did not succeed because the OEM Windows version was Windows 8 and I was trying to install Windows 8 Pro? If that is the case, could somebody please send me a link to download a Windows 8 ISO? I am helpless and couldn't find one anywhere on the internet (without having to pay for a new key - but I should not have to pay, as the installer will use the OEM key).

    Read the article

  • How to install Konica Minolta PagePro 1300W to Windows 7 64-bit?

    - by jsalonen
    I recently switched to Windows 7 (Home Pro, 64-bit) only to discover that my Konica Minolta PagePro 1300W printer no longer works. When it is connected, Windows 7 reports that it cannot install a driver for the device. I have done a lot of googling to solve this problem, with no luck so far. On Konica Minolta's official website I can find drivers only for Windows XP/2000. My current reasoning is that they don't currently support Windows 7 (let alone the 64-bit version of it) for this rather old printer, and most likely never will. So my question is: does anyone have any good tips on how to make this printer work on my system? Is there any other place I could look for drivers or, in general, do you know of any workarounds that could get my printer working? One of the workarounds I have been considering is to install Windows XP or Ubuntu Linux in VirtualBox and use that system when I really need to print. This is of course not the optimal solution, but it would let me use the printer until I buy a newer model.

    Read the article

  • Smart Array P400 - Accelerator Replacement Battery Failure

    - by inflammable
    TL;DR - Is the immediate failure of a replacement battery, bought to replace a failed battery on the battery-backed accelerator of a Smart Array P400 controller, a common occurrence? Or are we likely to have a storage controller with an impending, critical fault? We have a slightly confusing situation with a Smart Array P400 storage controller with the 512MB battery-backed accelerator add-on in an HP DL380 server. The storage controller is (AFAIK) running the latest firmware and driver:

        Model: Smart Array P400 Controller
        Status: OK
        Firmware Version: 7.24
        Serial Number: *snip*
        Rebuild Priority: Medium
        Expand Priority: Medium
        Number Of Ports: 2

    The storage diagnostics (both on the boot-up screen for the controller and within the 'Management Homepage' and the 'HP Array Diagnostic Utility') recently started showing the following fault status for the accelerator battery:

        Accelerator Status: Temporarily Disabled
        Error Code: Cache Disabled Low Batteries
        Serial Number: *snip*
        Total Memory: 524288 KB
        Read Cache: 25%
        Write Cache: 75%
        Battery Status: Failed
        Read Errors: 0
        Write Errors: 0

    We replaced the battery with a new unit (a visual inspection of the P400 card showed nothing unusual) and saw the same fault, but expected it to disappear over the course of a few hours/days as the battery charged. This didn't happen, and the fault status remains the same as above. Given that the battery is a genuine part from HP, I wouldn't have expected a replacement battery to fail straight away or to be dead on arrival (is that naivety on my part?). Is the immediate failure of a replacement battery, for a failed battery, on a battery-backed accelerator a common occurrence? Or are we likely to have a storage controller with an impending, critical fault? Is there any diagnostic that could tell me more about the failed battery, without cracking the server open again? Many thanks!

    Read the article

  • BIOS interrupts, privilege levels and paging

    - by Jack
    Hi, I was learning about Intel 8086-80486 CPUs and their interactions with hardware, but I still don't understand it very well. Please help me fill in the blank spots. First, I know that the CPU communicates with hardware using BIOS interrupts. But what really happens in the PC when I call some INT instruction? I know that, according to the interrupt table, some instructions begin to execute, but how can the BIOS recognize what I want to do just from executing some instructions? Because, as far as I know, the CPU has no extra communication channel with the BIOS; it can only address memory and receive data. So how can I instruct the BIOS to do something, when I can only address RAM? The next thing I don't understand is privilege levels. I know about the ring model and access rights, but how does the CPU know at which privilege level an instruction is executing? I think these privileges apply only when an instruction is trying to address memory, but how does an application get its privilege level? I mean, I know it's level 3, but how is that set? And lastly, I know that paging is an addressing scheme used to support application-transparent virtual memory, or swapping, but I could not find any information about how paging is tied to protected mode. Is paging a separate mode independent of protected mode, or is it somehow implemented within protected mode? And if it is implemented in protected mode, isn't it too slow to first resolve the application's address space and offset, and then the page directory, page, and offset once again?

    Read the article

  • blu-ray archiving in vmware ESXi 4

    - by spacecadet77
    Hi, I need some advice about using blu-ray writer for archiving data on vmware ESXi 4. At office we have IBM System x3400 Tower server with ESXi 4 hipervisor and OpenSuse and CentOS GNU/Linux system as guests. Will blu-ray writer work in this setup, and if it will is there any particular model you can suggest. Best regards IBM System x3400 Tower server specification: 1x Intel Quad-Core Xeon E5410 2.33GHz/ 12MB/ 1333MHz (2x CPU max) Intel 5000P chipset, 2x 1GB PC2-5300 DDR2 667MHz SDRAM ECC Chipkill (32GB max) 2x4GB (2x2GB) PC2-5300 CL5 ECC DDR2 FBDIMM (x3400, x3550, x3650) SAS/SATA Hot-Swap Open Bay (0xHDD std, 4xHDD max, 8xHDD optional) ServeRAID 8K dual channel SAS/SATA controller (RAID 0,1,1E,10,5,6, 256MB, Battery Backup) Graphics ATI® RN50(ES1000) 16MB DDR, CD-RW/DVD Combo no FDD GigaEthernet, Tower with Power Supply 835W (opt Redudant) Slot 1: half-length, PCI-Express x8(x4 electrical) Slot 2: full, PCI-Express x8 Slot 3: full, PCI-Express x8 Slot 4: full, 64-bit 133MHz 3.3v PCI-X Slot 5: full, 64-bit 133MHz 3.3v PCI-X , Slot 6: half-length, 32-bit 33MHz 5.0v PCI ports: 4x USB (Vers 2.0), 2x PS/2, parallel, 2x serial (9-pin), VGA, RJ-45 (ethernet ), RJ-45 (sys mgm) HDD 4 x TB 7200rpm / Serial ATA II 3.0Gb/s / 16MB, RoHS

    Read the article

  • Windows7 - “The specified network password is not correct.” when the password is in fact correct.

    - by Win7 Home User
    I have a samba server setup for some time now. It is a Hardware NAS - which unfortunately does not provide access to the Samba logs. (the exact model of the NAS is called Addonics NAS Adapter ) I also have a Windows Vista and a Windows XP machine - from both I am able to map \\192.168.0.20\Smd with no errors ( net use l: \\192.168.0.20\Smd works, after asking for my username and password). I also bought a brand new computer, with Windows 7, and when I try to execute the same exact net use command on it - using the exact same username/password pair, I get a "The specified network password is not correct." message. I also tried mapping from the Windows explorer menu, and got the same error. I synchronized the clocks of the two machines, tried again... and yet the same error persists. So what is really surprising here is that mapping works from WindowXP and Windows Vista machines, but fails from a Windows7 machine using the exact same command and username/password - Anyone has any idea of what could be causing this or how to solve the problem? Thanks

    Read the article
