Search Results

Search found 4983 results on 200 pages for 'excel formula'.


  • Small store infrastructure - where to begin?

    - by KevinM1
    It looks like my older brother is about to change jobs - from lawyer to shooting range proprietor - and since I'm the family 'computer guy' I have the task of coming up with and setting up the in-store equipment. Only problem, I don't know how to start or where to look. I'm a web programmer, not an IT specialist. To that end, I figured I should ask the pros. Users: 3 (myself, my brother, and his business partner) Equipment: 1 Windows (likely 7) desktop for POS software, 1 Windows desktop/laptop for backroom use (bookkeeping, etc.) Other: ?? I'm looking for a reliable and, well, idiot-proof way to handle backups. Neither my brother nor his business partner are tech savvy (A web browser, email, MS Word and Excel are about the extent of their knowledge), so I need something they can handle. On-site would be preferable to off-site, given my brother's hesitance to have sensitive business data be handled by an outside source. I'm also looking for a small on-site server. I estimate that, at most, only 2-3 users will need access. A linux solution would keep costs down, but I'm concerned about Windows <- linux interoperability. Would the store security cameras' storage be handled by the security company, or would we have to stream that data to our own server? I know from my own experience with personal security that the company gives/loans a recording device to the home owner, but I'm not sure about business security. I know this sounds like a shopping list, and it's pretty vague. I wish I could give more detail, but between my own ignorance and things not being 100% nailed down on the business end, I'm a bit stuck. At the very least I'd like a nudge - links on a place to start, what to look for, things I need to think about, etc. - for this endeavor. Thanks.

    Read the article

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is: I keep an Excel sheet with the names of the most important tasks/projects and look at it at the beginning of each day to decide which ones to focus on; in iCal I write down events for each day, or for a concrete time slot (13:00 to 14:00); each day I set up the tasks I want to accomplish and allocate hours to them; I use Things (Cultured Code) to keep info about less important tasks and projects that are not time-allocated yet (GTD name = someday); I use Mail on Mac and create folders, named after the different projects, for the mails I want to process; and I save the main info for each project in FreeMind maps. My system works well at the moment but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements: it must be 100% offline accessible; it should use as few programs/resources as possible, ideally just one program able to manage all my info; I can use the GTD methodology mixed with priorities, and I can allocate each task, converted to an event, on my calendar; I can have different daily/weekly, etc. views on a calendar to see the "big picture"; it must run on Mac OS X Leopard; price does not matter, I will pay for this. So, according to your experience, can you recommend something like this? Thanks

    Read the article

  • Prevent Outlook 2010 Insert Picture resizing image

    - by Rup
    When I "Insert Picture" a JPEG in Outlook 2010 it automatically resizes the image and, I think, recompresses it too. I realise this would be useful for photographs or for people who try to email 1MB BMPs but I would like to email around an image at the original pixel size without recompression. Is there a way to turn this off, or better still choose settings for each image insert? I found this page in the Office help. It's for Word, PowerPoint and Excel not Outlook but points you at File, Options, Advanced, Image Settings. There's no equivalent section in Outlook. I know Outlook uses Word as its editor so I've looked at Word's settings but there isn't an 'original size' here: there's only 'turn off image recompression' and pick target DPI from 96, 150, 220. I guess Office is finding a DPI value in the JPEG file and scaling it up or down to match this setting. I can't find an equivalent option in Outlook's options menu but there's so many settings and pop-up dialogs I may have missed something. Picture Format, Reset image size resets the image to the rescaled version, not the original. I can't see a way to edit a pixel value into size values in the image properties after insert. Thanks! I realise I can probably achieve this by editing the image metadata in PhotoShop elements or similar but there ought to be a way without editing the file? This is new behaviour in Outlook 2010; 2007 didn't do this.

    Read the article

  • How do I count the times each number appears in columns of numbers?

    - by Andy C.
    I am sure this must be easy, but I am inexperienced. About the best way to think of my problem is to think of it as trying to sort and then count lottery numbers. To stay simple, let's do a Pick 3 game. Let's look at 10 drawings. I would split each drawn number into a separate column:

      DATE   BALL#1   BALL#2   BALL#3
      3/1      1        3        5
      3/2      3        7        8
      3/3      2        2        1
      3/4      5        7        6
      3/5      2        3        1
      3/6      0        5        9
      3/7      3        7        0
      3/8      6        8        4
      3/9      2        4        3
      3/10     7        1        2

    I would like to be able to build formulas into cells that would tell me how many times each number appeared overall, and how many times each number appeared in each position. Like this (using the above example):

      Number   Overall Count   Ball#1 Count   Ball#2 Count   Ball#3 Count
      0        2               1              0              1
      1        4               1              1              2

    (That is, the number zero appears twice overall, and came up once as the first number drawn, zero times as the middle ball, and once as the third ball. Likewise, the number 1 was drawn four times in our 10-day period. It was the first ball once, the second ball once and the third ball twice.) And so on. I have access to Excel and Microsoft Works, or of course there may be a Google Docs way to handle this. Thanks for any help.
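
    A minimal sketch of the kind of formulas that can do this in Excel (the cell references are assumptions: the drawn balls sit in B2:D11 and the digits 0 through 9 are listed down column A starting at A14):

      Overall count (B14, filled down):            =COUNTIF($B$2:$D$11, $A14)
      Per-ball count (C14, filled right and down): =COUNTIF(B$2:B$11, $A14)

    COUNTIF counts how many cells in the range equal the digit in column A; the mixed references let the same two formulas be filled across the whole summary table. The same functions exist in Google Docs spreadsheets.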

    Read the article

  • Outlook Calendar Attachments to have limited access to just Required attendees

    - by Jason Pearce
    The management team at my company often attaches documents (Word, Excel, PDFs) to their Outlook Calendar meeting requests. The meeting requests are sent to the managers, but also to their assistants. The desire is to have everyone be able to view the full meeting request and its content, but limit the ability to open the attachments to just the managers. Is there a way in Outlook 2003 and/or 2007 to limit access to attachments that accompany meeting requests? Ideally, can access to the attachments be controlled by the "Select Attendees and Resources" window when selecting individuals from the Global Address List. Can those in the Required field have access to the attachments while those in the Optional or Resources fields not have access? My suggestion was to simply place all meeting attachments in a shared network folder that has read/write access limited to managers. They would then just place fully qualified links to those files in the body of the Meeting Request. While everyone would receive and see the links, only a few would have access. This, however, wasn't easy enough for them, so I'm looking for some other ideas.

    Read the article

  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd party app that "makes a call" to write files to a file share on our network using the currently logged in credentials of the Windows domain user. Meaning the 3rd party app doesn't pass the apps credentials but simply issues a behind the scenes copy command to take a source file specified and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for Document Control (think svn/git I guess, similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine...but here's my issue: I need a way to lock down the file share from being accessed/modified outside of using the 3rd party app (meaning prevent explorer/word/excel/etc from getting to that share). I know I can do the following: make the share a hidden share ($) - this definitely helps. Most users would have zero clue on how to get to such a share. Solves probably 95% of my issue. go one step further and set the "Hidden" attribute on the folders in the hidden share - this would go a little further in that even if a user knows the path to the hidden share like \\server\hidden$ they still won't see folders in that share without changing their explorer options to "show hidden files/folder Any other ideas on how I can lock this down? The users still need modify rights to this share/folders since the 3rd party app relies on their Windows permissions to that location when copying the files into it. I can't really use 3rd party tools to password protect the folder/share without causing the 3rd party app functions to fail.
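
    A minimal sketch of the two steps described above, run from a command prompt on the server (the share path and group name are placeholders):

      net share docrepo$=D:\DocRepo /GRANT:"DOMAIN\Doc Users",CHANGE
      attrib +h D:\DocRepo\* /S /D

    The trailing $ keeps the share out of browse lists, and attrib +h hides the folders beneath it; neither prevents a determined user with modify rights from reaching the files, which is why this only covers about 95% of the problem as noted above.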

    Read the article

  • How to configure default text selection behavior in Windows XP, 7? (eg. mouse click selects entire word vs. mouse click inserts an active cursor)

    - by Mouse of Fury
    I find the mouse click behavior of Windows XP and Windows 7 annoying and intrusive. I don't remember Windows NT being quite this bad, or MacOS 7 - 10 which I used in the nineties. When I'm using a browser and I click on a text field - for example, the address bar, or a search box - the first thing which happens is the entire field is selected.Subsequent clicks seem to select parts of words, often deciding arbitrarily to exclude or include adjacent punctuation. The same in Excel and other apps, and when trying to rename files, so I'm assuming this behavior comes from a system-wide text handling routine. I frequently want to edit text, cut out or replace odd parts of the insides of words or chunks of sentences, and often find that to get a simple cursor to insert I have to click the mouse up to 4 times in succession. I've had to do a lot of this recently and it has been driving me insane. Is there a place at the system level where this can be configured? In a perfect world, I'd like a single click on a new text area to insert a cursor point, and a rapid double click to select the entire area. Words or text within the area could be selected by inserting a cursor, holding down the mouse button and dragging to the exact point where I want the selection to end - even if that's in the middle of a word. No, I don't need or want Windows to "smart select" a word or sentence for me. I've looked in the Mouse and Accessibility Options control panels (Windows XP). Haven't found anything even close. Thanks -

    Read the article

  • How do I make a PPT file as small as possible?

    - by grunwald2.0
    Currently I am agonizing over several large presentation files, which I happened to reprint to PDFs... One thing I wondered: do PPTs (from Microsoft PowerPoint) always have to be that big? And what would be the strategies for making a PPT smaller? (If we say "ceteris paribus" at e.g. 25 slides, and assuming that one isn't allowed to use a cloud-based service like GDocs, rocketslide or Prezio.) Of course there are the obvious "bad guys": images and graphics. But how about roll-over animations etc.? Who knows how much space they take? How about SmartArt graphics? Could one save file size by using OpenOffice or LibreOffice Impress? (I didn't try it yet.) And "what if": what if we need to include e.g. five images (or charts that can't be remade in Excel in time), how would we best reduce the file-size impact of those five images, if we needed to? I ask all this from an honest "business" perspective. I am no nerd or "Microsoft MVP" and I don't intend on delving into LaTeX or similar yet. But that doesn't mean that I am not curious and very willing to learn. I am basically interested in (proven) best practices. Yes, I know this question is lacking "initial research", but I think the perspective of my question is interesting and unique to a lot of people, and if we intend to make SE a "Q&A" / wiki kind of reference site, this question might be a good way to "collect" advice on a question that has a very defined goal: minimum file size.

    Read the article

  • Permissions error when creating desktop shortcut

    - by Ryan M.
    Hey guys, I have a user that's got a weird permissions problem on Windows 7. He's trying to create a shortcut for Outlook on his desktop(he doesn't want it in his start menu or his taskbar...). If we right click the outlook.exe and do Send to Desktop, it works just fine. If we do a search for "outlook" in the search bar, and then try and drag and drop the outlook icon to the desktop, we get the error message "You need Permission to perform this action. You require permission from SYSTEM to make changes to this file: Microsoft Office Outlook 2007". Dragging and dropping other exe's onto the desktop work just fine. They create shortcuts without any problems. But if I try to do ANY of the Office programs (Word, Excel, Outlook, etc..) I get this permission error. Any ideas? He's using an A.D. account and he's in the local administrators group. He's an executive so he's not accepting "this isn't a real problem because I found another way to make a shortcut" as an answer. Any help is appreciated.

    Read the article

  • Windows mounted network drives slow after upgrading switch

    - by Kver
    On our small business network our old 10/100 consumer-grade switch gave up the ghost, and we replaced it with a proper business-grade gigabit switch. After wiring it in, our Linux and Mac users immediately got back to working off of network drives; but two of our three Windows 7 PCs have suddenly experienced a tremendous slowdown with mapped network drives: Windows gets stuck "discovering" a folder, causing applications to freeze when trying to open files. It will instantly display and browse files, but the moment you try to open one the bug hits. To remedy this we have our users copying files to the desktop, but it can take a few minutes while Windows is stuck "calculating" the time the copy will take. These aren't big files, mostly Excel sheets less than 500 KB; these operations are instant on Linux and Mac. (The third Windows machine is having no issues.) I've tried remapping the drives, mapping to different drive letters, rebooting, etc. I'm at a loss, because switches are mostly transparent, and it's only after the switch was replaced that the Windows PCs started acting up. What black-magic voodoo am I missing to make Windows work? Thank you.

    Read the article

  • How can you exclude folders from appearing in the Recent Items feature of Windows 7 start menu?

    - by Jordan Weinstein
    to be clear, I like the 'recent items' feature. I do not want to turn it off. I work at a law firm where we integrate Office with a document management system (DMS). If recent items are turned on, those DMS opened documents will show up in the recent items of a Windows 7 start menu when hovering over Word (or Excel\PPT etc). However the integration doesn't work correctly so if a user were to click on one of those, something wouldn't work right. In short, we've always needed to turn off Recent Items completely for a DMS integrated workstation. Curious if anyone knows of a way to exclude a directory from being "captured" so to speak. When you open a DMS document, the file gets copied to local directory where it saves it as you work, until you close and it checks it back in to the DMS. I'd like to be able to exclude that local directory from recent items. so local files in My Docs and Desktop would show up in recent items, but not DMS opened documents. Hope this makes sense.

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large reorganization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

      /var/www/sites/vhost1, vhost2, vhost3
        .../wordpress/blogX
        .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

      /var/www/data/vhost1, vhost2, vhost3
        .../wordpress/blogX/uploads
        .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup. Once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, Excel document, whatever, and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less-technical people. Ideally it would be awesome if this visual representation could then be expanded to get more technical details.
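
    One option for a visual representation that stays in plain text (so it can live alongside the configs in SVN) is a Graphviz "dot" file; a minimal sketch, with all node names as placeholders:

      digraph webserver {
          rankdir=LR;
          testing  [label="Testing server\n(rsync to live after testing)"];
          apache   [label="Apache\n/etc/httpd/conf.d/vhosts.d/*.conf"];
          vhost1   [label="vhost1\n/var/www/sites/vhost1"];
          blog1    [label="WordPress blog1\n(SVN checkout)"];
          data1    [label="/var/www/data/wordpress/blog1/uploads"];
          testing -> apache;
          apache  -> vhost1 -> blog1;
          blog1   -> data1 [label="symlink"];
      }

    Rendering it with "dot -Tpng infra.dot -o infra.png" gives a diagram that less-technical people can follow, while the source stays diffable and easy to keep current.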

    Read the article

  • Automating Access 2007 Queries (changing one criteria)

    - by Graphth
    So, I have 6 queries and I want to run them all once at the end of each month. (I know a bit about SQL but they're simply built using Access's design view). So, in the next few days, perhaps I'll run the 6 queries for May, as May just ended. I only want the data from the month that just ended, so the query has Criteria set as the name of the month (e.g., May). Now, it's not hugely time consuming to change all of these each month, but is there some way to automate this? Currently, they're all set to April and I want to change them all to May when I run them in a few days. And each month, I'd like to type the month (perhaps in a textbox in a form or somewhere else if you know a better way) just once and have it change all 6 queries, without having to manually open all 6, scroll over to the right field and change the Criteria. Note (about VBA): I have used Excel VBA so I know the basics of VBA but I don't really know anything specific to Access (other than seeing code a few times). And, others will use this who do not know anything about Access VBA. So, I think I have found a similar question/answer that could do this in VBA, but I'd rather do it some other way. If the query needs to be slightly redesigned later, probably by someone who doesn't know Access VBA at all, it'd be nice to have a solution not involving VBA if that is even possible.
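
    A minimal sketch of the non-VBA approach hinted at above: point all six queries' criteria at a single textbox on a form instead of a literal month name (the form, control, table and field names here are placeholders):

      SELECT *
      FROM tblActivity
      WHERE [SaleMonth] = [Forms]![frmMonthEnd]![txtMonth];

    With frmMonthEnd open and "May" typed into txtMonth, all six queries pick up the same value, and anyone can still tweak the queries in design view without touching VBA.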

    Read the article

  • trying to divide complex numbers, division by zero

    - by user553619
    I'm trying the program below to divide complex numbers, it works for complex numbers but not when the denominator is real (i.e, the complex part is zero). Division by zero occurs in this line ratio = b->r / b->i ;, when the complex part b->i is zero (in the case of a real denominator). How do I get around this? and why did the programmer do this, instead of the more straightforward rule for complex division The wikipedia rule seems to be better, and no division by zero error would occur here. Did I miss something? Why did the programmer not use the wikipedia formula?? Thanks /*! @file dcomplex.c * \brief Common arithmetic for complex type * * <pre> * -- SuperLU routine (version 2.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * November 15, 1997 * * This file defines common arithmetic operations for complex type. * </pre> */ #include <math.h> #include <stdlib.h> #include <stdio.h> #include "slu_dcomplex.h" /*! \brief Complex Division c = a/b */ void z_div(doublecomplex *c, doublecomplex *a, doublecomplex *b) { double ratio, den; double abr, abi, cr, ci; if( (abr = b->r) < 0.) abr = - abr; if( (abi = b->i) < 0.) abi = - abi; if( abr <= abi ) { if (abi == 0) { fprintf(stderr, "z_div.c: division by zero\n"); exit(-1); } ratio = b->r / b->i ; den = b->i * (1 + ratio*ratio); cr = (a->r*ratio + a->i) / den; ci = (a->i*ratio - a->r) / den; } else { ratio = b->i / b->r ; den = b->r * (1 + ratio*ratio); cr = (a->r + a->i*ratio) / den; ci = (a->i - a->r*ratio) / den; } c->r = cr; c->i = ci; }
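
    For comparison, a minimal sketch of the straightforward (Wikipedia) formula the question refers to; the ratio-based version above is generally preferred in numerical code because forming b->r*b->r + b->i*b->i directly can overflow or underflow when the parts of b are very large or very small:

      /* naive complex division: c = a / b, no scaling against overflow */
      void z_div_naive(doublecomplex *c, doublecomplex *a, doublecomplex *b)
      {
          double den = b->r * b->r + b->i * b->i;   /* zero only when b == 0 */
          c->r = (a->r * b->r + a->i * b->i) / den;
          c->i = (a->i * b->r - a->r * b->i) / den;
      }

    Note also that in the original routine the error branch is only reached when abr <= abi and abi == 0, i.e. when both parts of b are zero, so a purely real, non-zero denominator takes the else branch and does not divide by zero.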

    Read the article

  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: My application deals with gift cards. When we create cards they have to have a unique string that the user can use to redeem it with. So when someone orders our gift cards, like a retailer, we need to make a lot of new card objects and store them in the DB. With that in mind, I'm trying to see how quickly I can have my application generate 100,000 Cards. Database expert, I am not, so I need someone to explain this little phenomena: When I create 1000 Cards, it takes 5 seconds. When I create 100,000 cards it should take 500 seconds right? Now I know what you're wanting to see, the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of a bunch of cards, more as it goes along. But I can show you my rake task desc "Creates cards for a retailer" task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args| t = Time.now puts "Searching for retailer" @retailer = Retailer.find_by_name(args[:retailer_name]) puts "Retailer found" puts "Generating codes" value = args[:value].to_i number_of_cards = args[:number_of_cards].to_i codes = [] top_off_codes(codes, number_of_cards) while codes != codes.uniq codes.uniq! top_off_codes(codes, number_of_cards) end stored_codes = Card.all.collect do |c| c.code end while codes != (codes - stored_codes) codes -= stored_codes top_off_codes(codes, number_of_cards) end puts "Codes are unique and generated" puts "Creating bundle" @bundle = @retailer.bundles.create!(:value => value) puts "Bundle created" puts "Creating cards" @bundle.transaction do codes.each do |code| @bundle.cards.create!(:code => code) end end puts "Cards generated in #{Time.now - t}s" end def top_off_codes(codes, intended_number) (intended_number - codes.size).times do codes << ReadableRandom.get(CODE_LENGTH) end end I'm using a gem called readable_random for the unique code. So if you read through all of that code, you'll see that it does all of it's uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at creating. Meanwhile it flies through the uniqueness tests. So my question to the stackoverflow community is: Why is my database slowing down as I add more cards? Why is this not a linear function in regards to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions, how would you optimize this method, and how fast do you think you could get it to create 100,000 cards? (When I plotted out my times on a graph and did a quick curve fit to get my line formula, I calculated how long it would take to create 100,000 cards with my current code and it says 5.5 hours. That maybe completely wrong, I'm not sure. But if it stays on the line I curve fitted, it would be right around there.)
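
    One thing worth trying, purely as a sketch (the column names are assumptions based on the model above): the per-card create! issues 100,000 separate INSERTs plus validations and callbacks, so building multi-row INSERT statements in batches usually changes the shape of that curve dramatically:

      conn = Card.connection
      codes.each_slice(1000) do |batch|
        values = batch.map { |code| "(#{@bundle.id}, #{conn.quote(code)})" }.join(", ")
        conn.execute("INSERT INTO cards (bundle_id, code) VALUES #{values}")
      end

    This bypasses ActiveRecord instantiation entirely, so it only fits if the Card model has no callbacks that matter here; checking for an index on cards.code is also worth doing, since without one the uniqueness scan gets slower as the table grows.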

    Read the article

  • Scipy Negative Distance? What?

    - by disappearedng
    I have a input file which are all floating point numbers to 4 decimal place. i.e. 13359 0.0000 0.0000 0.0001 0.0001 0.0002` 0.0003 0.0007 ... (the first is the id). My class uses the loadVectorsFromFile method which multiplies it by 10000 and then int() these numbers. On top of that, I also loop through each vector to ensure that there are no negative values inside. However, when I perform _hclustering, I am continually seeing the error, "Linkage Z contains negative values". I seriously think this is a bug because: I checked my values, the values are no where small enough or big enough to approach the limits of the floating point numbers and the formula that I used to derive the values in the file uses absolute value (my input is DEFINITELY right). Can someone enligten me as to why I am seeing this weird error? What is going on that is causing this negative distance error? ===== def loadVectorsFromFile(self, limit, loc, assertAllPositive=True, inflate=True): """Inflate to prevent "negative" distance, we use 4 decimal points, so *10000 """ vectors = {} self.winfo("Each vector is set to have %d limit in length" % limit) with open( loc ) as inf: for line in filter(None, inf.read().split('\n')): l = line.split('\t') if limit: scores = map(float, l[1:limit+1]) else: scores = map(float, l[1:]) if inflate: vectors[ l[0]] = map( lambda x: int(x*10000), scores) #int might save space else: vectors[ l[0]] = scores if assertAllPositive: #Assert that it has no negative value for dirID, l in vectors.iteritems(): if reduce(operator.or_, map( lambda x: x < 0, l)): self.werror( "Vector %s has negative values!" % dirID) return vectors def main( self, inputDir, outputDir, limit=0, inFname="data.vectors.all", mappingFname='all.id.features.group.intermediate'): """ Loads vector from a file and start clustering INPUT vectors is { featureID: tfidfVector (list), } """ IDFeatureDic = loadIdFeatureGroupDicFromIntermediate( pjoin(self.configDir, mappingFname)) if not os.path.exists(outputDir): os.makedirs(outputDir) vectors = self.loadVectorsFromFile( limit, pjoin( inputDir, inFname)) for threshold in map( lambda x:float(x)/30, range(20,30)): clusters = self._hclustering(threshold, vectors) if clusters: outputLoc = pjoin(outputDir, "threshold.%s.result" % str(threshold)) with open(outputLoc, 'w') as outf: for clusterNo, cluster in clusters.iteritems(): outf.write('%s\n' % str(clusterNo)) for featureID in cluster: feature, group = IDFeatureDic[featureID] outline = "%s\t%s\n" % (feature, group) outf.write(outline.encode('utf-8')) outf.write("\n") else: continue def _hclustering(self, threshold, vectors): """function which you should call to vary the threshold vectors: { featureID: [ tfidf scores, tfidf score, .. ] """ clusters = defaultdict(list) if len(vectors) > 1: try: results = hierarchy.fclusterdata( vectors.values(), threshold, metric='cosine') except ValueError, e: self.werror("_hclustering: %s" % str(e)) return False for i, featureID in enumerate( vectors.keys()):
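
    A minimal sketch of one way to isolate the problem, assuming the error comes from tiny negative cosine distances produced by floating-point round-off: compute the condensed distance matrix yourself, clamp it, and feed it to linkage/fcluster directly:

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster import hierarchy

      data = np.array(vectors.values(), dtype=float)
      dists = pdist(data, metric='cosine')      # condensed pairwise distances
      dists = np.clip(dists, 0.0, 2.0)          # clamp round-off negatives to 0
      Z = hierarchy.linkage(dists, method='single')
      flat = hierarchy.fcluster(Z, t=threshold, criterion='distance')

    If any vector is all zeros, its cosine distance is undefined (NaN) and will also break the linkage, so it is worth asserting np.isfinite(dists).all() before clustering.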

    Read the article

  • How to interpolate hue values in HSV colour space?

    - by nick
    Hi, I'm trying to interpolate between two colours in HSV colour space to produce a smooth colour gradient. I'm using a linear interpolation, eg: h = (1 - p) * h1 + p * h2 s = (1 - p) * s1 + p * s2 v = (1 - p) * v1 + p * v2 (where p is the percentage, and h1, h2, s1, s2, v1, v2 are the hue, saturation and value components of the two colours) This produces a good result for s and v but not for h. As the hue component is an angle, the calculation needs to work out the shortest distance between h1 and h2 and then do the interpolation in the right direction (either clockwise or anti-clockwise). What formula or algorithm should I use? EDIT: By following Jack's suggestions I modified my JavaScript gradient function and it works well. For anyone interested, here's what I ended up with: // create gradient from yellow to red to black with 100 steps var gradient = hsbGradient(100, [{h:0.14, s:0.5, b:1}, {h:0, s:1, b:1}, {h:0, s:1, b:0}]); function hsbGradient(steps, colours) { var parts = colours.length - 1; var gradient = new Array(steps); var gradientIndex = 0; var partSteps = Math.floor(steps / parts); var remainder = steps - (partSteps * parts); for (var col = 0; col < parts; col++) { // get colours var c1 = colours[col], c2 = colours[col + 1]; // determine clockwise and counter-clockwise distance between hues var distCCW = (c1.h >= c2.h) ? c1.h - c2.h : 1 + c1.h - c2.h; distCW = (c1.h >= c2.h) ? 1 + c2.h - c1.h : c2.h - c1.h; // ensure we get the right number of steps by adding remainder to final part if (col == parts - 1) partSteps += remainder; // make gradient for this part for (var step = 0; step < partSteps; step ++) { var p = step / partSteps; // interpolate h var h = (distCW <= distCCW) ? c1.h + (distCW * p) : c1.h - (distCCW * p); if (h < 0) h = 1 + h; if (h > 1) h = h - 1; // interpolate s, b var s = (1 - p) * c1.s + p * c2.s; var b = (1 - p) * c1.b + p * c2.b; // add to gradient array gradient[gradientIndex] = {h:h, s:s, b:b}; gradientIndex ++; } } return gradient; }

    Read the article

  • nhibernate subclass in code

    - by Antonio Nakic Alfirevic
    I would like to set up table-per-classhierarchy inheritance in nhibernate thru code. Everything else is set in XML mapping files except the subclasses. If i up the subclasses in xml all is well, but not from code. This is the code i use - my concrete subclass never gets created:( //the call NHibernate.Cfg.Configuration config = new NHibernate.Cfg.Configuration(); SetSubclass(config, typeof(TAction), typeof(tActionSub1), "Procedure"); //the method public static void SetSubclass(Configuration configuration, Type baseClass, Type subClass, string discriminatorValue) { PersistentClass persBaseClass = configuration.ClassMappings.Where(cm => cm.MappedClass == baseClass).Single(); SingleTableSubclass persSubClass = new SingleTableSubclass(persBaseClass); persSubClass.ClassName = subClass.AssemblyQualifiedName; persSubClass.DiscriminatorValue = discriminatorValue; persSubClass.EntityPersisterClass = typeof(SingleTableEntityPersister); persSubClass.ProxyInterfaceName = (subClass).AssemblyQualifiedName; persSubClass.NodeName = subClass.Name; persSubClass.EntityName = subClass.FullName; persBaseClass.AddSubclass(persSubClass); } the Xml mapping looks like this: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Riz.Pcm.Domain.BusinessObjects" assembly="Riz.Pcm.Domain"> <class name="Riz.Pcm.Domain.BusinessObjects.TAction, Riz.Pcm.Domain" table="dbo.tAction" lazy="true"> <id name="Id" column="ID"> <generator class="guid" /> </id> <discriminator type="String" formula="(select jt.Name from TJobType jt where jt.Id=JobTypeId)" insert="true" force="false"/> <many-to-one name="Session" column="SessionID" class="TSession" /> <property name="Order" column="Order1" /> <property name="ProcessStart" column="ProcessStart" /> <property name="ProcessEnd" column="ProcessEnd" /> <property name="Status" column="Status" /> <many-to-one name="JobType" column="JobTypeID" class="TJobType" /> <many-to-one name="Unit" column="UnitID" class="TUnit" /> <bag name="TActionProperties" lazy="true" cascade="all-delete-orphan" inverse="true" > <key column="ActionID"></key> <one-to-many class="TActionProperty"></one-to-many> </bag> <!--<subclass name="Riz.Pcm.Domain.tActionSub" discriminator-value="ZPower"></subclass>--> </class> </hibernate-mapping> What am I doing wrong? I can't find any examples on google:(

    Read the article

  • Help with a logic problem

    - by Stradigos
    I'm having a great deal of difficulty trying to figure out the logic behind this problem. I have developed everything else, but I really could use some help, any sort of help, on the part I'm stuck on. Back story: *A group of actors waits in a circle. They "count off" by various amounts. The last few to audition are thought to have the best chance of getting the parts and becoming stars. Instead of actors having names, they are identified by numbers. The "Audition Order" in the table tells, reading left-to-right, the "names" of the actors who will be auditioned in the order they will perform.* Sample output: etc, all the way up to 10. What I have so far: using System; using System.Collections; using System.Text; namespace The_Last_Survivor { class Program { static void Main(string[] args) { //Declare Variables int NumOfActors = 0; System.DateTime dt = System.DateTime.Now; int interval = 3; ArrayList Ring = new ArrayList(10); //Header Console.Out.WriteLine("Actors\tNumber\tOrder"); //Add Actors for (int x = 1; x < 11; x++) { NumOfActors++; Ring.Insert((x - 1), new Actor(x)); foreach (Actor i in Ring) { Console.Out.WriteLine("{0}\t{1}\t{2}", NumOfActors, i, i.Order(interval, x)); } Console.Out.WriteLine("\n"); } Console.In.Read(); } public class Actor { //Variables protected int Number; //Constructor public Actor(int num) { Number = num; } //Order in circle public string Order(int inter, int num) { //Variable string result = ""; ArrayList myArray = new ArrayList(num); //Filling Array for (int i = 0; i < num; i++) myArray.Add(i + 1); //Formula foreach (int element in myArray) { if (element == inter) { result += String.Format(" {0}", element); myArray.RemoveAt(element); } } return result; } //String override public override string ToString() { return String.Format("{0}", Number); } } } } The part I'm stuck on is getting some math going that does this: Can anyone offer some guidance and/or sample code?
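
    In case it helps, a minimal sketch of the count-off itself (the part the tables illustrate), assuming a fixed interval and that System.Linq and System.Collections.Generic are referenced; it removes every interval-th actor from the ring until nobody is left, which yields the audition order:

      // ring holds actor numbers 1..n; interval is how far they "count off"
      List<int> ring = Enumerable.Range(1, 10).ToList();
      List<int> auditionOrder = new List<int>();
      int pos = 0, interval = 3;
      while (ring.Count > 0)
      {
          pos = (pos + interval - 1) % ring.Count;  // wrap around the circle
          auditionOrder.Add(ring[pos]);             // this actor auditions next
          ring.RemoveAt(pos);                       // and leaves the circle
      }

    auditionOrder then lists the actors in the order they perform, with the last few entries being the "survivors" with the best chance; the existing Actor.Order method could be rewritten around a loop like this instead of filtering myArray.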

    Read the article

  • Writing csv files with python with exact formatting parameters

    - by Ben Harrison
    I'm having trouble with processing some csv data files for a project. The project's programmer has moved onto greener pastures, and now I'm trying to finish the data analysis up (I did/do the statistical analysis.) The programmer suggested using python/csv reader to help break down the files, which I've had some success with, but not in a way I can use. This code is a little different from what I was trying before. I am essentially attempting to create an array. In the raw data format, the first 7 rows contain no data, and then each column contains 50 experiments, each with 4000 rows, for 200000 some rows total. What I want to do is take each column, and make it an individual csv file, with each experiment in its own column. So it would be an array of 50 columns and 4000 rows for each data type. The code here does break down the correct values, I think the logic is okay, but it is breaking down the opposite of how I want it. I want the separators without quotes (the commas and spaces) and I want the element values in quotes. Right now it is doing just the opposite for both, element values with no quotes, and the separators in quotes. I've spent several hours trying to figure out how to do this to no avail, import csv ifile = open('00_follow_maverick.csv') epistemicfile = open('00_follower_maverick_EP.csv', 'w') reader = csv.reader(ifile) colnum = 0 rownum = 0 y = 0 z = 8 for column in reader: rownum = 4000 * y + z for element in column: writer = csv.writer(epistemicfile) if y <= 50: y = y + 1 writer.writerow([element]) writer.writerow(',') rownum = x * y + z if y > 50: y = 0 z = z + 1 writer.writerow(' ') rownum = x * y + z if z >= 4008: break What is going on: I am taking each row in the raw data file in iterations of 4000, so that I can separate them with commas for the 50 experiments. When y, the experiment indicator here, reaches 50, it resets back to experiment 0, and adds 1 to z, which tells it which row to look at, by the formula of 4000 * y + z. When it completes the rows for all 50 experiments, it is finished. The problem here is that I don't know how to get python to write the actual values in quotes, and my separators outside of quotes. Any help will be most appreciated. Apologies if this seems a stupid question, I have no programming experience, this is my first attempt ever. Thank you.
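
    A minimal sketch of the quoting behaviour being asked for: csv.writer puts the delimiters outside the quotes automatically, and quoting=csv.QUOTE_ALL wraps every field value in quotes (Python 2 file mode shown to match the code above):

      import csv

      with open('00_follower_maverick_EP.csv', 'wb') as outfile:
          writer = csv.writer(outfile, delimiter=',', quoting=csv.QUOTE_ALL)
          writer.writerow(['0.0001', '0.0002', '0.0003'])
          # produces: "0.0001","0.0002","0.0003"

    Creating the writer once, outside the loop, and collecting each experiment's 4000 values into a list before a single writerow(list) call would also avoid the stray writerow(',') and writerow(' ') rows in the current version.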

    Read the article

  • R glm standard error estimate differences to SAS PROC GENMOD

    - by Michelle
    I am converting a SAS PROC GENMOD example into R, using glm in R. The SAS code was: proc genmod data=data0 namelen=30; model boxcoxy=boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 + RACE1 + RACE3 + WEEKEND + SEQ/dist=normal; FREQ REPLICATE_VAR; run; My R code is: parmsg2 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 + RACE1 + RACE3 + WEEKEND + SEQ , data=data0, family=gaussian, weights = REPLICATE_VAR) When I use summary(parmsg2) I get the same coefficient estimates as in SAS, but my standard errors are wildly different. The summary output from SAS is: Name df Estimate StdErr LowerWaldCL UpperWaldCL ChiSq ProbChiSq Intercept 1 6.5007436 .00078884 6.4991975 6.5022897 67911982 0 agegrp4 1 .64607262 .00105425 .64400633 .64813891 375556.79 0 agegrp5 1 .4191395 .00089722 .41738099 .42089802 218233.76 0 agegrp6 1 -.22518765 .00083118 -.22681672 -.22355857 73401.113 0 agegrp7 1 -1.7445189 .00087569 -1.7462352 -1.7428026 3968762.2 0 agegrp8 1 -2.2908855 .00109766 -2.2930369 -2.2887342 4355849.4 0 race1 1 -.13454883 .00080672 -.13612997 -.13296769 27817.29 0 race3 1 -.20607036 .00070966 -.20746127 -.20467944 84319.131 0 weekend 1 .0327884 .00044731 .0319117 .03366511 5373.1931 0 seq2 1 -.47509583 .00047337 -.47602363 -.47416804 1007291.3 0 Scale 1 2.9328613 .00015586 2.9325559 2.9331668 -127 The summary output from R is: Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 6.50074 0.10354 62.785 < 2e-16 AGEGRP4 0.64607 0.13838 4.669 3.07e-06 AGEGRP5 0.41914 0.11776 3.559 0.000374 AGEGRP6 -0.22519 0.10910 -2.064 0.039031 AGEGRP7 -1.74452 0.11494 -15.178 < 2e-16 AGEGRP8 -2.29089 0.14407 -15.901 < 2e-16 RACE1 -0.13455 0.10589 -1.271 0.203865 RACE3 -0.20607 0.09315 -2.212 0.026967 WEEKEND 0.03279 0.05871 0.558 0.576535 SEQ -0.47510 0.06213 -7.646 2.25e-14 The importance of the difference in the standard errors is that the SAS coefficients are all statistically significant, but the RACE1 and WEEKEND coefficients in the R output are not. I have found a formula to calculate the Wald confidence intervals in R, but this is pointless given the difference in the standard errors, as I will not get the same results. Apparently SAS uses a ridge-stabilized Newton-Raphson algorithm for its estimates, which are ML. The information I read about the glm function in R is that the results should be equivalent to ML. What can I do to change my estimation procedure in R so that I get the equivalent coefficents and standard error estimates that were produced in SAS? To update, thanks to Spacedman's answer, I used weights because the data are from individuals in a dietary survey, and REPLICATE_VAR is a balanced repeated replication weight, that is an integer (and quite large, in the order of 1000s or 10000s). The website that describes the weight is here. I don't know why the FREQ rather than the WEIGHT command was used in SAS. I will now test by expanding the number of observations using REPLICATE_VAR and rerunning the analysis.
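
    One thing to check, sketched under the assumption that REPLICATE_VAR is a case-replication count: SAS's FREQ statement treats each record as if it appeared REPLICATE_VAR times (inflating n), whereas weights= in R's glm does not change the effective sample size, which is enough to produce exactly this kind of standard-error gap. Expanding the data first makes the two runs comparable:

      # repeat each row REPLICATE_VAR times, mimicking SAS FREQ
      data0_expanded <- data0[rep(seq_len(nrow(data0)), times = data0$REPLICATE_VAR), ]
      parmsg2 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 +
                     RACE1 + RACE3 + WEEKEND + SEQ,
                     data = data0_expanded, family = gaussian)
      summary(parmsg2)

    With replicate weights in the thousands this expansion can get large, so treat it as a check on where the discrepancy comes from rather than the final analysis; survey-weighted methods (e.g. the survey package) are the more principled route for BRR weights.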

    Read the article

  • Why is the W3C box model considered better?

    - by Mel
    Why do most developers consider the W3C box model to be better than the box model used by Internet Explorer? I know it's very frustrating developing pages that look the way you want them to on Internet Explorer, but I find the W3C box model counter-intuitive. For example, if margins, padding, and border were factored into the width, I could assign fixed width values to all my columns without having to worry about how many columns I have and what changes I make to my padding and margins. Under the W3C box model I have to worry about how many columns I have, and develop something akin to a mathematical formula to calculate my values. Changing them would be nightmarish, especially for complex layouts. My novice opinion is that it's more restrictive. Consider this small framework I wrote:

      #content { margin:0 auto 30px auto; padding:0 30px 30px 30px; width:900px; }
      #content .column { float:left; margin:0 20px 20px 20px; }
      #content .first { margin-left:0; }
      #content .last { margin-right:0; }
      .width_1-4 { width:195px; }
      .width_1-3 { width:273px; }
      .width_1-2 { width:430px; }
      .width_3-4 { width:645px; }
      .width_1-1 { width:900px; }

    The values I have assigned here will falter unless there are three columns, so that the margins work out to 0 (first) + 20 + 20 + 20 + 20 + 0 (last). It would be a disaster if I wanted to add padding to my columns too, as my entire setup would have to be recalibrated. Now imagine if the column width incorporated all the other elements: all I would need to do is change one associated value, and I have my layout. I'm less criticizing it and more hoping to understand why it's better, or why I'm finding it more difficult. Am I doing this whole thing wrong? I don't know very much about this topic, but it seems counter-intuitive to use the W3C box model. Some advice would be really appreciated. Thanks
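
    For what it's worth, CSS does offer a way to opt into the border-box behaviour described here on a per-element basis; a minimal sketch (the values are just illustrative):

      #content .column {
          -moz-box-sizing: border-box;   /* older Gecko builds */
          box-sizing: border-box;        /* width now includes padding and border */
          width: 273px;
          padding: 10px;
      }

    With box-sizing: border-box, changing the padding no longer forces the column widths to be recalculated, which is exactly the recalibration problem described above.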

    Read the article

  • shipping and handling fee calculation

    - by Newb
    Here is the question: Many companies normally charge a shipping and handling fee for purchases. Create a Web page that allows a user to enter a purchase price into a text box - include a JavaScript function that calculates shipping and handling. Add functionality to the script that adds a minimum shipping and handling fee of $1.50 for any purchase that is less than or equal to $25.00. For any orders over $25.00, add 10% to the total purchase price for shipping and handling, but do not include the $1.50 minimum shipping and handling fee. After you determine the total cost of the order (purchase plus shipping and handling), display it in an alert dialog box. I am beginner at JavaScript and struggling to get my code to work. It does display an alert box with the value entered by the user but doesn't add anything. Although, I don't know why the formula doesn't work. Please help. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Calculating Shipping & Handling</title> <script type="text/javascript"> /* <![CDATA[ */ var price=[]; var shipping=[]; var total=price+shipping; function calculateShipping(){ if (price <= 25){ shipping = (price + 1.5); } else { shipping = (price * 10 / 100); } window.alert("The purchase price with shipping is " + document.calculate.ent.value); } /* ]]> */ </script> </head> <body> <form name ="calculate" action="" > <p>Enter Purchase Price</p> <input type="text" name="ent" > <input type="button" name="button" value="Submit" onClick="calculateShipping()" /> </form> </body> </html>
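
    For reference, a minimal sketch of what the calculation probably needs to look like: the posted function never converts the entered price to a number and never adds the shipping to a total before displaying it (the names match the form above):

      function calculateShipping() {
          var price = parseFloat(document.calculate.ent.value);
          var shipping = (price <= 25) ? 1.50 : price * 0.10;
          var total = price + shipping;
          window.alert("The total cost of the order is " + total.toFixed(2));
      }

    The top-level var price / var shipping / var total lines in the posted script run once at page load, before the user has typed anything, which is why the alert only ever echoes the raw field value.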

    Read the article

  • Loading the last related record instantly for multiple parent records using Entity framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below? I am trying for our next release to come up with a performant way to show the placed orders for the logged on customer. Of course paging is always a good technique to use when a lot of data is available I would like to see an answer without any paging techniques. Here's the story: a customer places an order which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace for statusses and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple orderstatusses stored in OrderStatusHistory. In my testscenario I am using a customer which has 100+ Orders each with about 5 records in the OrderStatusHistory-table. I would for now like to see all orders in one page not using paging where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields coming from OrderStatusHistory; the record with the highest Id for the given OrderId). There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried. Trying to do Include() when getting Orders but this still results in multiple queries launched on the database. Each order triggers an extra query to the database to get all orderstatusses in the history table. So all statusses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database. Having 2 computed columns on the database: LastStatus, LastStatusInformation and a regular Linq-Query which gets those columns which are available through the Entity-model. The problem with this approach is the fact that those computed columns are determined using a scalar function which can not be changed without removing the formula from the computed column, etc... In the end I am very familiar with SQL and Stored procedures, but since the rest of the data-layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this: WITH cte (RN, OrderId, [Status], Information) AS ( SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC), OrderId, [Status], Information FROM OrderStatus ) SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.* FROM [Order] o INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1 WHERE CustomerId = @CustomerId ORDER BY 1 DESC; which returns all orders for the customer with the statusinformation provided by the Common Table Expression. Does anyone know a good approach using Entity Framework?
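
    In case it helps frame answers, a minimal sketch of the single-query shape being asked about, written against hypothetical entity and navigation-property names (Orders, OrderStatusHistories):

      var orders =
          (from o in context.Orders
           where o.CustomerId == customerId
           select new
           {
               Order = o,
               LastStatus = o.OrderStatusHistories
                             .OrderByDescending(h => h.Id)
                             .FirstOrDefault()
           })
          .OrderByDescending(x => x.Order.Id)
          .ToList();

    Entity Framework translates this into a single SQL statement with a correlated subquery per order, so the extra round trip per order caused by Include() or lazy loading disappears.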

    Read the article

  • How to get colliding effect or bouncy when ball hits the track.

    - by Chandan Shetty SP
    I am using below formula to move the ball circular, where accelX and accelY are the values from accelerometer, it is working fine. But the problem in this code is mRadius (I fixed its value to 50), i need to change mRadius according to accelerometer values and also i need bouncing effect when it touches the track. Currently i am developing code by assuming only one ball is on the board. float degrees = -atan2(accelX, accelY) * 180 / 3.14159; int x = cCentrePoint.x + mRadius * cos(degreesToRadians(degrees)); int y = cCentrePoint.y + mRadius * sin(degreesToRadians(degrees)); Here is the snap of the game i want to develop: Updated: I am sending the updated code... mRadius = 5; mRange = NSMakeRange(0,60); -(void) updateBall: (UIAccelerationValue) accelX withY:(UIAccelerationValue)accelY { float degrees = -atan2(accelX, accelY) * 180 / 3.14159; int x = cCentrePoint.x + mRadius * cos(degreesToRadians(degrees)); int y = cCentrePoint.y + mRadius * sin(degreesToRadians(degrees)); //self.targetRect is rect of ball Object self.targetRect = CGRectMake(newX, newY, 8, 9); self.currentRect = self.targetRect; //http://books.google.co.in/books?id=WV9glgdrrrUC&pg=PA455#v=onepage&q=&f=false static NSDate *lastDrawTime; if(lastDrawTime!=nil) { NSTimeInterval secondsSinceLastDraw = -([lastDrawTime timeIntervalSinceNow]); ballXVelocity = ballXVelocity + (accelX * secondsSinceLastDraw) * [self isTouchedTrack:mRadius andRange:mRange]; ballYVelocity = ballYVelocity + -(accelY * secondsSinceLastDraw) * [self isTouchedTrack:mRadius andRange:mRange]; distXTravelled = distXTravelled + secondsSinceLastDraw * ballXVelocity * 50; distYTravelled = distYTravelled + secondsSinceLastDraw * ballYVelocity * 50; CGRect temp = self.targetRect; temp.origin.x += distXTravelled; temp.origin.y += distYTravelled; int radius = (temp.origin.x - cCentrePoint.x) / cos(degreesToRadians(degrees)); if( !NSLocationInRange(abs(radius),mRange)) { //Colided with the tracks...Need a better logic here ballXVelocity = -ballXVelocity; } else { // Need a better logic here self.targetRect = temp; } //NSLog(@"angle = %f",degrees); } [lastDrawTime release]; lastDrawTime = [ [NSDate alloc] init]; } In the above code i have initialized mRadius and mRange(indicate track) to some constant for testing, i am not getting the moving of the ball as i expected( bouncing effect when Collided with track ) with respect to accelerometer. Help me to recognize where i went wrong or send some code snippets or links which does the similar job. I am searching for better logic than my code, if you found share with me.
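
    Not a full answer, but a minimal sketch of the usual way to get a bounce once the collision with the track is detected: reflect the velocity about the track's surface normal at the contact point (variable names follow the code above; the restitution factor is an assumption):

      // outward normal at the contact point on the circular track
      float nx = cos(degreesToRadians(degrees));
      float ny = sin(degreesToRadians(degrees));
      float vDotN = ballXVelocity * nx + ballYVelocity * ny;
      // v' = v - (1 + e) * (v . n) * n, with e = 0.8 as a guessed bounciness
      ballXVelocity -= 1.8 * vDotN * nx;
      ballYVelocity -= 1.8 * vDotN * ny;

    Simply negating ballXVelocity, as the current code does, reverses the ball along the x axis regardless of where on the circle it hit, which is why the motion looks wrong away from the horizontal extremes.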

    Read the article
