Search Results

Search found 387 results on 16 pages for 'ish'.


  • Google Chrome issues

    - by Ben Hooper
    I'm an avid user of Chrome and would defend it to the death, but there is no denying that it has issues, and I've been experiencing a lot of them recently.

    Issues

    Stuck tabs
    Issue: Around 60% of the time (higher when the system has just started), when Chrome is launched, some or all of the pinned tabs and startup pages will get stuck in the counter-clockwise-rotating "contacting server" mode and never snap out of it.
    Fix: Quit and launch Chrome until they load properly (this really isn't good, as quitting and launching Chrome can and has cleared all pinned tabs).
    Extra information: If you stop the loading and re-enter the URL, the page will load perfectly. The number of pinned tabs or startup pages seems to be irrelevant, but I could be wrong.

    "Downloaded/out of"
    Issue: Each item in the download bar has a download status section, formatted as "downloaded/out of". Sometimes Chrome doesn't display the "out of" part.

    Collapsed settings windows
    Issue: In Chrome version 19(ish)+, settings are configured via overlaid popup windows. Sometimes the window will open fully collapsed.
    Fix: Resize Chrome's window or open the developer tools.

    "Network error" with large downloads
    Issue: Sometimes, when downloading large files (500 MB+), Chrome will download the entire file, the download status will freeze (for example, "1 GB/1 GB") for a few minutes, report "Network error", and delete the .crdownload file.
    Extra information: The same file from the same website on the same computer downloads perfectly in other browsers. The website and file type seem to be irrelevant.

    General information: I've experienced some of these issues on my home PC and some on my work PC, both of which run Windows 7 Ultimate x64. The only thing that links them is my Google account (all settings are synced). Updating Chrome hasn't worked. Most of these issues presented themselves around version 17 and have continued right through to 21 (current). Uninstalling Chrome, deleting all data in %programFiles(x86)%\Google and %localAppData%\Google, and reinstalling hasn't worked. I have yet to see whether disabling all extensions makes a difference, but it's hard to diagnose as these issues don't occur 100% of the time.

    In my case, I don't know if there's an actual solution. I'm just curious whether anyone else is having similar issues to those I'm experiencing. At least then I'll know it's an issue with Chrome itself.

    Read the article

  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools.

    We have an application on AIX that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers - some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer - we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive - in short, there's a lot of complication.

    We have another application that runs on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool.

    For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have than a fixed requirement: has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this.

    Edit - based on someone's suggestion I looked at CFengine for the AIX application, but it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers; we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them - some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone because it's mostly done by hand. I guess I'm looking for config-file generation and management.

    I have built a small Perl script to do something similar for a much smaller case - it binds a CSV file into variables and then does a copy-and-search-and-replace from a set of template config files. I could probably do the same here.
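
    For illustration, here is a minimal sketch of that bind-and-replace approach, transposed to C#. The file names, the name,value CSV layout, and the ${placeholder} token syntax are assumptions for the example, not our real setup:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class ConfigGenerator
        {
            static void Main()
            {
                // Bind the CSV into variables: one "name,value" pair per line
                // (hypothetical layout; a real file might have a header row).
                var vars = new Dictionary<string, string>();
                foreach (var line in File.ReadAllLines("customer.csv"))
                {
                    var parts = line.Split(new[] { ',' }, 2);
                    if (parts.Length == 2)
                        vars[parts[0].Trim()] = parts[1].Trim();
                }

                Directory.CreateDirectory("output");

                // Copy-and-search-and-replace over each template config file,
                // substituting ${name} tokens with the bound values.
                foreach (var template in Directory.GetFiles("templates", "*.tmpl"))
                {
                    var text = File.ReadAllText(template);
                    foreach (var kv in vars)
                        text = text.Replace("${" + kv.Key + "}", kv.Value);

                    var target = Path.Combine("output",
                        Path.GetFileNameWithoutExtension(template));
                    File.WriteAllText(target, text);
                }
            }
        }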

    Read the article

  • System won't boot: Gigabyte HD 7790 1GB OC GPU issue or Corsair VS550 PSU issue?

    - by MGOwen
    Installed a new GPU, and the PC won't boot. Turn it on and:
    - No monitor signal at all (tried HDMI and VGA via DVI, on 2 working monitors).
    - CPU and GPU fans DO spin, but
    - No system beeps, no sounds from drives (they might make a small noise in the first second or so, but there's definitely no OS loading or anything like that).
    - If I hit the "power off" button it turns off immediately (no holding it down for 3 seconds like usual).
    - If I put my old HD 5670 GPU back in, everything works fine.

    But (plot twist!) the card is not totally dead. My friend put it in his PC, and it works fine (he even played a game for 15 minutes, no issues). He has a Corsair TX850 850W and a Gigabyte MB. So my main theory is: the GPU isn't getting enough power from the PSU. But is it:
    - Bad PSU? Seems unlikely, since it works fine with the other GPU. Also, the PSU is brand new and 550W (single 42A/504W 12V rail) - overkill for this GPU. Corsair is a decent brand, but maybe just mine is faulty?
    - Bad GPU? Could it be drawing more power than it should be, somehow? Supposedly the HD 7790 needs only 21A/75W on the 12V rail, though this one is factory overclocked a bit... but should that triple the power requirement?
    - Something else? Could there be a motherboard incompatibility somehow? Both MB and GPU are less than a year old and PCI Express 3.0 x16.

    Things I've tried:
    - Re-seating the video card.
    - Testing the PC with the old GPU (works fine, same PCIe slot).
    - Checking AMD's stated amp/watt requirements of a 7790 against my PSU (see above). My PSU can output twice the amps (single rail) and 5x the wattage a 7790 needs.

    Here are the full specs:
    - Gigabyte HD 7790 1GB OC GPU
    - Corsair VS550 550W PSU
    - 4GB RAM
    - AsRock H61M U3S3 motherboard
    - i3-2100
    - 500GB SATA HDD (2007-ish)
    - blu-ray drive (new)
    - PCI 802.11g card

    Edit: A motherboard BIOS update seems to have fixed it. (If anyone has the same problem and it doesn't work, comment here.)

    Read the article

  • SQL Developer Blitz at ODTUG Kscope12

    - by thatjeffsmith
    Oracle Development Tools User Group (ODTUG) puts on an outstanding event, and I enjoy that the content comes FIRST. Yes, the after-event parties and entertainment are first class, but I look forward most to sitting in on some excellent sessions. For Kscope12 one would expect Oracle to have a large presence, and you would be absolutely correct! The APEX team will be there in full force, and we’ll have sessions on JDeveloper, ADF, and .NET. But what I want to talk about today is our awesome line-up of coverage for Oracle SQL Developer (Surprise!)

    DB and Developer’s Toolbox Symposium
    Kris Rice, or @krisrice, Product Development Manager for SQL Developer, will speak at 10AM Sunday about SQL Developer Data Modeler. Our free data modeling solution allows one to reverse engineer a data dictionary to a model, modify it, and create a script of the changes. Collaboration is an important part of any development team; with built-in subversion support, the modeler makes collaboration easy, not just possible.

    After the morning break, I’ll be talking about SQL Developer’s PL/SQL support. From creating your code, to debugging, tuning, testing, and documenting PL/SQL – SQL Developer fits the bill. Since I have a full hour, I should have time to do a little riff on using source control to version and manage your revisions too!

    At 3:15 Jagan Athreya will talk about the new integration between SQL Developer and Enterprise Manager Cloud Control 12c. Enabling developers to define changes in SQL Developer and allowing DBAs to promote these changes to Test and Production via Enterprise Manager will reduce errors, accelerate productivity, and help eliminate unplanned downtime. Get your SQL Developer groove on at ODTUG Kscope12!

    Presentations

    SQL Developer Tips and Tricks
    When: Monday June 25, Session 5, 4:15 pm – 5:15 pm
    I’ll take you through my favorite keyboard shortcuts, top 10 preferences every user should tweak, and spotlight features that the average user probably hasn’t discovered yet. My goal for this session is for everyone to take away 1-2 tips they can implement immediately to save mucho time. I enjoy interacting with the audience, so no two versions of this presentation are the same.

    Oracle SQL Developer and Data Modeler New Features
    When: Tuesday June 26, Session 6, 8:30 am – 9:30 am
    Ashley Chen, my PM-partner-in-crime, will be covering all the new features from our two latest updates. So if you’re new to SQL Developer, or you’ve been using an older version, stop by and see what new toys you have to play with. I also have a bet with Ashley that she will have more attendees than me, so be sure to show up so I can collect.

    Debugging PL/SQL With SQL Developer
    When: Wednesday June 27, Session 16, 3:00 pm – 4:00 pm
    Me again – sorry. This time I have an entire hour to JUST talk about PL/SQL and debugging! Should you use a watch with a break condition, or a breakpoint with a passcount? How does external debugging with a Perl script work? Can I just debug an anonymous PL/SQL block? So if debugging to you is just a DBMS_OUTPUT.PUT_LINE() call, stop by and see how our IDE can help you take things to the next level! Or is that level++?

    Hands-on Training

    SQL Developer Soup to Nuts
    When: Tuesday, 8:00 AM – 9:30 AM
    If you learn by doing, this is the session for you. Bring your own laptop or use one of the lab machines. We’ll give you a VirtualBox OEL image running 11gR2 EE Database with all the fixin’s (that’s Southern speak for Partitioning, Advanced Compression, Tuning & Diagnostic Packs, etc.), TimesTen, APEX, and much more. All you have to do is log in and run through our lab exercises. You can start with a model and work your way up to debugging and testing your own application, or you can pick and choose your lessons to suit your needs. We’ll have people on hand to help you out and answer your questions.

    Booth Hours
    We’ll be in the vendor area and have our very own ‘demo pod’ for SQL Developer. Between Kris, Ashley, and me, we should be able to answer your questions or show you how to ‘do that thing’ in the tool. Or just stop by and say hello! We’ll be around the following hours’ish:
    - Sunday, June 24, 2012: 6:00 PM – 8:00 PM
    - Monday, June 25, 2012: 9:00 AM – 4:30 PM
    - Tuesday, June 26, 2012: 9:30 AM – 3:30 PM
    - Wednesday, June 27, 2012: 10:15 AM – 2:00 PM

    No Excuses – If You Have Questions, This is Your Chance to Get Your Answers!
    We’re doing just about everything outside of a scavenger hunt to bring information and value to our users. Let us know what you like, what you don’t like, and we’ll do our best to do more of the former and less of the latter!

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters
    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research
    Research the tool(s) that you want to use. Some tools provide all of the features you would need while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes.

    Don’t settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:
    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables.
    - Add/drop procedures/functions/views.
    - Alter tables (rename columns, add columns, remove columns).
    - Massage data – migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn’t want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.

    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL-based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren’t cross-platform anyway. IMO stick to SQL-based migrations.

    Reconciling Production
    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice.

    Add whatever schema changes tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database. Your tool should support doing this for you. You can add this table to production when you do your next release.

    Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.
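
    To make those mechanics concrete, here is a minimal sketch of what a SQL-based migration runner does: track the schema version in a table, apply numbered scripts in order, and wrap each script in a transaction. The connection string, the SchemaVersion table, the folder, and the file-naming scheme are all hypothetical; this illustrates the idea and is not TSqlMigrations' or RoundHouse's actual code:

        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Linq;

        class MigrationRunner
        {
            static void Main()
            {
                using (var conn = new SqlConnection(
                    "Server=.;Database=Dev;Integrated Security=true"))
                {
                    conn.Open();
                    int current = (int)new SqlCommand(
                        "SELECT Version FROM SchemaVersion", conn).ExecuteScalar();

                    // Scripts named like 0001_create_tables.sql, applied in order.
                    var pending = Directory.GetFiles("migrations", "*.sql")
                                           .OrderBy(f => f)
                                           .Where(f => ScriptNumber(f) > current);

                    foreach (var script in pending)
                    {
                        // Transactional updates: roll the whole script back on
                        // error. Note a real runner must also split batches on
                        // GO, which a single ExecuteNonQuery won't do.
                        using (var tx = conn.BeginTransaction())
                        {
                            try
                            {
                                new SqlCommand(File.ReadAllText(script), conn, tx)
                                    .ExecuteNonQuery();
                                new SqlCommand("UPDATE SchemaVersion SET Version = " +
                                    ScriptNumber(script), conn, tx).ExecuteNonQuery();
                                tx.Commit();
                            }
                            catch
                            {
                                tx.Rollback();
                                throw;
                            }
                        }
                    }
                }
            }

            static int ScriptNumber(string path)
            {
                return int.Parse(Path.GetFileName(path).Split('_')[0]);
            }
        }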
    Say No to Shared Dev DBs
    Simply put, you wouldn’t dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups, and take the shared version offline (including the dev db server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won’t be able to easily script out their own changes from those that others are working on.

    First prod release
    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation
    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don’t leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations: we ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors! Another good example is generating change scripts. We had been manually making these for months. I found an open source project called Open DB Diff and integrated this with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier!

    Enjoy -Wes

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to log on with. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I'd have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

    Windows wants you to use Windows Live
    Windows 8 logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because, well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate, I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my Live logins to work from the Live account, so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

    Who am I?
    The real problem to me, though, is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl), it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account, and I have to log in to Windows with my Live account, what do you suppose this returns?

        Console.WriteLine(Environment.UserName);

    It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings, and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer for me), I see everything running on the desktop shell with my login running under my Windows user account.

    I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back, I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup!

    So this is who you really are!
    The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login, because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username.
    That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS, if I look at a trace page (or in my case my app's status page), I see that the logged-on account is - my Windows user account.

    What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my web server online, I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

    So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user:
    - In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info).
    - When logging on to a local machine resource with user name and password, you have to use your Live ID even if the permissions on the resource are mapped to your underlying Windows account.

    Easy enough I suppose, but still not exactly intuitive behavior…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows.
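
    If you want to check the mapping on your own machine, a tiny console sketch like this shows what .NET reports for the logged-on user (standard APIs, nothing Windows 8 specific; per the behavior described above, both identity lines show the underlying Windows account rather than the Live ID):

        using System;
        using System.Security.Principal;

        class WhoAmI
        {
            static void Main()
            {
                // Reports the underlying Windows account (e.g. "rstrahl"),
                // not the Live ID typed at the logon screen.
                Console.WriteLine("Environment.UserName:   " + Environment.UserName);
                Console.WriteLine("WindowsIdentity:        " +
                                  WindowsIdentity.GetCurrent().Name);
                Console.WriteLine("Environment.UserDomain: " +
                                  Environment.UserDomainName);
            }
        }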

    Read the article

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
    Last night was the CS London group's event, Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant.

    This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon. Not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations. A lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation. We're still competing for credibility and resource. What we're doing less is living in the beginning of a project.

    This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way), who described "the day two problem". Companies, he suggested, build websites for launch day, and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle. The thing about living through a lot of Day Two is that you see a lot of failure.

    Nothing succeeds like failure?
    Failure is good. When it's structured right, it's an awesome tool for learning - that's kind of how video games work. I'm chewing over a whole blog post about this, but basically in game-like learning, you try, fail, go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon. It works best when that failing step is acutely felt, but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure.

    As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop - although you may well be there because they've been round it once - but you can show them other people's round trip. It's not as compelling, but it's not bad.

    What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal - content inventories, user testing, and analytics, for instance. But if we're picking up big, organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do?

    My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success. Something quickly doable and visible - a content problem solved with a measurable business result. Now, that's a tall order; but scrape off the "quickly" and it's the whole reason we're here.
    At Red Gate, I've started with the textbook fear-and-passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends. And if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change.

    The first thing I've done - after the visual content inventory - is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list.

    The payoff here is twofold. There's the nebulous, hard-to-illustrate "bad content does user experience and brand damage" argument; and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable.

    But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.

    Read the article

  • MSCC: Career & IT Fair 2014

    A couple of weeks ago, Ibraahim and Yunus asked me whether it would be interesting to participate in the 1st Career & IT Fair organised by the UoM Computer Club. Well, luckily we had met at the Global Windows Azure Bootcamp, though I wasn't too sure whether it would be possible for me to attend at all, mainly because of work demands and because the Mauritius Software Craftsmanship Community currently has no advertising material at all. Here's the brief statement of the event:

    "The UOM Students' Computer Club in collaboration with the UOM Students' Union and UOM CSE Department is organising a 'Career & IT Fair' on the 23rd and 24th April 2014. This event has for objective to provide a platform to tertiary students, secondary students as well as vocational students, the opportunity to meet job recruiters."

    Luckily, I was reminded that the 23rd is a Wednesday, and therefore I decided that it might be interesting to move our weekly Code & Coffee session to the university and hence be able to attend the career fair. As it turned out, it was a great choice, and thankfully Pritvi, Nadim, and Ishwon volunteered to be around at the "community booth". Thankfully, the computer club gave us - the MSCC and the LUGM - one of their spaces in the lobby area of the Paul Octave Wiéhé Auditorium.

    My impression about the event
    Very well and professionally organised. Seriously, the lads over at the UoM Computer Club did a great job organising their 2-day event, and I felt comfortable at all times. Actually, it was kind of amusing to see some of the members constantly running around and checking everything; even so, the whole process went smoothly and easily.

    There were a couple of interesting pieces of information and announcements during the opening ceremony. For example, the Computer Science faculty is a very young one, initiated back in 1988 by just 4 staff members. Now, after 25 years, they have achieved quite a lot, and there are currently 1,000+ active students attending the numerous lectures and courses. But there is no room to rest on previous achievements, and I was kind of surprised to hear that there are plans to extend the campus and offer new lectures in the fields of nanotechnology, big data handling, and - crossing fingers - the introduction and establishment of a space control centre. Mauritius is already part of the Square Kilometre Array (SKA), and hopefully there will be more activities in that direction in the near future.

    Community - awareness and collaboration
    As stated earlier, I could only spend one morning, but luckily other members of the MSCC and the LUGM stayed during the whole two days and provided answers to any interested person. As for me, I took the opportunity to get in touch with the other companies in the lobby - mainly to create some awareness about our IT communities, but also to see whether there might be options for future engagement in common activities, too. So far, I was able to speak to representatives of the following companies:
    - ACCA Mauritius
    - Business at Work
    - Infomil
    - LinkByNet
    - Microsoft Indian Ocean Islands & French Pacific
    - Spherinity Training Institute
    - Spoon Consulting Ltd.
    - State Informatics Ltd.

    Unfortunately, I only had a quick chat with an HR representative of LinkByNet, but I fully count on our MSCC members like Nitin or LUGM member Ronny to spread our intentions over there.
    So far, all of the representatives were really interested in our concepts and activities, and I'm currently working on an introduction flyer for the MSCC that I'm going to send out to all those contacts via mail. It would be great to have more craftsmen as well as professional support on board.

    Some pictures from the event
    MSCC: Fantastic outlook for the near future. Announcements were made on big data, nanotechnology, and a space control centre in Mauritius. Interesting!
    MSCC: The lobby area was crammed with students. Great way to exchange and network. Good luck to all candidates!

    Passing the relay baton to...
    I recommend you continue reading about the first Career & IT Fair on Ish's blog. He has a great summary and more details on those two days of IT activities than I have. Thanks, and feel free to leave a comment (or two)...

    Read the article

  • Collision detection on floor tiles in an isometric game

    - by Anivrom
    I am having a very hard time figuring out a bug in my code. It should have taken me 20 minutes, but instead I've been working on it for over 12 hours. I am writing an isometric tile-based game where the characters can walk freely amongst the tiles, but not be able to cross over to certain tiles that have a collides flag. Sounds easy enough: just check ahead of where the player is going to move using a screen-coordinates-to-tile method, and check the tiles array using our returned x,y indexes to see if the tile is collidable or not. If it is, don't move the character.

    The problem I'm having is that my screen-to-tile method isn't spitting out the proper X,Y tile indexes, even though this method works flawlessly for selecting tiles with the mouse. NOTE: My X tiles go from left to right, and my Y tiles go from up to down - reversed from some examples on the net. Here's the relevant code:

        public Vector2 ScreentoTile(Vector2 screenPoint) {
            // Vector2 is just an object with x and y float properties.
            // camOffsetX,Y are my camera values that I use to shift everything
            // but the current camera target when the target moves.
            // tileScale = 128, screenHeight = 480; the -46 offset is to center
            // vertically, +16 px for some extra gfx in my tile png.
            Vector2 tileIndex = new Vector2(-1, -1);
            screenPoint.x -= camOffsetX;
            screenPoint.y = screenHeight - screenPoint.y - camOffsetY - 46;
            tileIndex.x = (screenPoint.x / tileScale) + (screenPoint.y / (tileScale / 2));
            tileIndex.y = (screenPoint.x / tileScale) - (screenPoint.y / (tileScale / 2));
            return tileIndex;
        }

    The method that calls this code is:

        private void checkTileTouched() {
            if (Gdx.input.justTouched()) {
                if (last.x >= 0 && last.x < levelWidth && last.y >= 0 && last.y < levelHeight) {
                    if (lastSelectedTile != null) lastSelectedTile.setColor(1, 1, 1, 1);
                    Sprite sprite = levelTiles[(int) last.x][(int) last.y].sprite;
                    sprite.setColor(0, 0.3f, 0, 1);
                    lastSelectedTile = sprite;
                }
            }
            if (touchDown) {
                float moveX = 0, moveY = 0;
                Vector2 pos = new Vector2();
                if (player.direction == direction_left) {
                    moveX = -(player.moveSpeed);
                    moveY = -(player.moveSpeed / 2);
                    Gdx.app.log("Movement", String.valueOf("left"));
                } else if (player.direction == direction_upleft) {
                    moveX = -(player.moveSpeed);
                    moveY = 0;
                    Gdx.app.log("Movement", String.valueOf("upleft"));
                } else if (player.direction == direction_up) {
                    moveX = -(player.moveSpeed);
                    moveY = player.moveSpeed / 2;
                    Gdx.app.log("Movement", String.valueOf("up"));
                } else if (player.direction == direction_upright) {
                    moveX = 0;
                    moveY = player.moveSpeed;
                    Gdx.app.log("Movement", String.valueOf("upright"));
                } else if (player.direction == direction_right) {
                    moveX = player.moveSpeed;
                    moveY = player.moveSpeed / 2;
                    Gdx.app.log("Movement", String.valueOf("right"));
                } else if (player.direction == direction_downright) {
                    moveX = player.moveSpeed;
                    moveY = 0;
                    Gdx.app.log("Movement", String.valueOf("downright"));
                } else if (player.direction == direction_down) {
                    moveX = player.moveSpeed;
                    moveY = -(player.moveSpeed / 2);
                    Gdx.app.log("Movement", String.valueOf("down"));
                } else if (player.direction == direction_downleft) {
                    moveX = 0;
                    moveY = -(player.moveSpeed);
                    Gdx.app.log("Movement", String.valueOf("downleft"));
                }

                // player.moveSpeed is 1; tileObjects.x is drawn in the center of
                // the screen (400px, 240px); the sprite width is 64, height is 128.
                testX = moveX * 10;
                testY = moveY * 10;
                testX += tileObjects.get(player.zIndex).x + tileObjects.get(player.zIndex).sprite.getWidth() / 2;
                testY += tileObjects.get(player.zIndex).y + tileObjects.get(player.zIndex).sprite.getHeight() / 2;
                moveX += tileObjects.get(player.zIndex).x + tileObjects.get(player.zIndex).sprite.getWidth() / 2;
                moveY += tileObjects.get(player.zIndex).y + tileObjects.get(player.zIndex).sprite.getHeight() / 2;

                pos = ScreentoTile(new Vector2(moveX, moveY));
                Vector2 pos2 = ScreentoTile(new Vector2(testX, testY));

                if (!levelTiles[(int) pos2.x][(int) pos2.y].collides) {
                    Vector2 newPlayerPos = ScreentoTile(new Vector2(moveX, moveY));
                    CenterOnCoord(moveX, moveY);
                    player.tileX = (int) newPlayerPos.x;
                    player.tileY = (int) newPlayerPos.y;
                }
            }
        }

    When the player is moving to the left (downleft-ish from the viewer's point of view), my pos2 X values decrease as expected, but pos2 isn't checking ahead on the X tiles - it is checking ahead on the Y tiles (as if we were moving DOWN, not left), and vice versa: if the player moves down, it will check ahead on the X values (as if we were moving LEFT, instead of DOWN) instead of the Y values.

    I understand this is probably the most confusing and horribly written post ever, but I'm confused myself, so I'm having a hard time explaining it to others. If you need more information, please ask! I'm so frustrated after over 12 hours of working on it that I'm about to give up.
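
    One way to corner this class of bug is to take the conversion math out of the game loop and round-trip it in isolation: feed tile coordinates through the forward (tile-to-screen) mapping and back through the inverse. Below is a small self-contained sketch - in C#, separate from the libgdx code above, with illustrative tile dimensions rather than the poster's constants. If the inverse has swapped axes or a flipped sign (which would produce exactly the "moving left checks the Y tiles" symptom), the round trip exposes it immediately:

        using System;

        class IsoRoundTrip
        {
            // Illustrative diamond dimensions, not the game's actual constants.
            const float TileW = 128f;
            const float TileH = 64f;

            // Forward mapping: tile coordinates to screen coordinates
            // (origin at tile (0,0), screen y growing downward).
            static void TileToScreen(float tx, float ty, out float sx, out float sy)
            {
                sx = (tx - ty) * (TileW / 2);
                sy = (tx + ty) * (TileH / 2);
            }

            // Inverse mapping, derived by solving the two equations above.
            static void ScreenToTile(float sx, float sy, out float tx, out float ty)
            {
                tx = (sy / TileH) + (sx / TileW);
                ty = (sy / TileH) - (sx / TileW);
            }

            static void Main()
            {
                // Round-trip a few tiles. An axis swap or sign error in the
                // inverse shows up as tx/ty coming back exchanged or negated.
                for (int x = 0; x < 3; x++)
                {
                    for (int y = 0; y < 3; y++)
                    {
                        float sx, sy, tx, ty;
                        TileToScreen(x, y, out sx, out sy);
                        ScreenToTile(sx, sy, out tx, out ty);
                        Console.WriteLine("({0},{1}) -> screen ({2},{3}) -> tile ({4},{5})",
                                          x, y, sx, sy, tx, ty);
                    }
                }
            }
        }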

    Read the article

  • C#/.NET Little Wonders: Static Char Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past Little Wonders posts can be found here. Often in our code we deal with the bigger classes and types in the BCL, and occasionally forget that there are some nice methods on the primitive types as well. Today we will discuss some of the handy static methods that exist on the char (the C# alias of System.Char) type.

    The Background
    I was examining a piece of code this week where I saw the following:

        // need to get the 5th (offset 4) character in upper case
        var type = symbol.Substring(4, 1).ToUpper();

        // test to see if the type is P
        if (type == "P")
        {
            // ... do something with P type...
        }

    Is there really any error in this code? No, but it still struck me wrong, because it is allocating two very short-lived throw-away strings just to store and manipulate a single char:
    - The call to Substring() generates a new string of length 1.
    - The call to ToUpper() generates a new upper-case version of the string from step 1.

    In my mind this is similar to using ToUpper() to do a case-insensitive compare: it isn't wrong, it's just much heavier than it needs to be (for more info on case-insensitive compares, see #2 in 5 More Little Wonders).

    One of my favorite books is C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Sutter and Alexandrescu. True, it's about C++ standards, but there's also some great general programming advice in there, including two rules I love:
    - 8. Don't optimize prematurely.
    - 9. Don't pessimize prematurely.

    We all know what #8 means: don't optimize when there is no immediate need, especially at the expense of readability and maintainability. I firmly believe this and in the axiom: it's easier to make correct code fast than to make fast code correct. Optimizing code to the point that it becomes difficult to maintain often gains little and often gives you little bang for the buck.

    But what about #9? Well, for that they state: "All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write than the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization." Or, if I may paraphrase: "where it doesn't increase the code complexity and readability, prefer the more efficient option".

    The example code above was one of those times where I feel we are violating a tacit C# coding idiom: avoid creating unnecessary temporary strings. The code creates temporary strings to hold one char, which is just unnecessary. I think the original coder thought he had to do this because ToUpper() is an instance method on string but not on char. What he didn't know, however, is that ToUpper() does exist on char; it's just a static method instead (though you could write an extension method to make it look instance-ish). This leads me (in a long-winded way) to my Little Wonders for the day…

    Static Methods of System.Char
    So let's look at some of these handy, and often overlooked, static methods on the char type:
    - IsDigit(), IsLetter(), IsLetterOrDigit(), IsPunctuation(), IsWhiteSpace() – methods to tell you whether a char (or position in a string) belongs to a category of characters.
    - IsLower(), IsUpper() – methods that check if a char (or position in a string) is lower or upper case.
    - ToLower(), ToUpper() – methods that convert a single char to the lower or upper equivalent.

    For example, if you wanted to see if a string contained any lower case characters, you could do the following:

        if (symbol.Any(c => char.IsLower(c)))
        {
            // ...
        }

    Which, incidentally, we could shorten using a method group:

        if (symbol.Any(char.IsLower))
        {
            // ...
        }

    Or, if you wanted to verify that all of the characters in a string are digits:

        if (symbol.All(char.IsDigit))
        {
            // ...
        }

    Also, for the IsXxx() methods, there are overloads that take either a char, or a string and an index, which means that these two calls are logically identical:

        // check given a character
        if (char.IsUpper(symbol[0])) { ... }

        // check given a string and index
        if (char.IsUpper(symbol, 0)) { ... }

    Obviously, if you just have a char, then you'd just use the first form. But if you have a string you can use either form equally well.

    As a side note, care should be taken when examining all the available static methods on the System.Char type, as some seem to be redundant but actually have very different purposes. For example, there are IsDigit() and IsNumber() methods, which sound the same on the surface, but give you different results: IsDigit() returns true if it is a base-10 digit character ('0', '1', … '9'), whereas IsNumber() returns true if it's any numeric character, including the characters for ½, ¼, etc.

    Summary
    To come full circle back to our opening example, I would have preferred the code be written like this:

        // grab 5th char and take upper case version of it
        var type = char.ToUpper(symbol[4]);

        if (type == 'P')
        {
            // ... do something with P type...
        }

    Not only is it just as readable (if not more so), but it performs over 3x faster on my machine:

        1,000,000 iterations of char method took: 30 ms, 0.000030 ms/item.
        1,000,000 iterations of string method took: 101 ms, 0.000101 ms/item.

    It's not only immediately faster because we don't allocate temporary strings, but as an added bonus there's less garbage to collect later as well. To me this qualifies as a case where we are using a common C# performance idiom (don't create unnecessary temporary strings) to make our code better.

    Technorati Tags: C#,CSharp,.NET,Little Wonders,char,string
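
    The post doesn't show the timing harness, but a rough Stopwatch-based reconstruction looks something like this (absolute numbers will vary by machine; the point is the relative gap):

        using System;
        using System.Diagnostics;

        class CharVsString
        {
            static void Main()
            {
                const int iterations = 1000000;
                string symbol = "ABCDP";

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    var type = char.ToUpper(symbol[4]);          // no temporary strings
                    if (type == 'P') { /* ... */ }
                }
                sw.Stop();
                Console.WriteLine("char method:   {0} ms", sw.ElapsedMilliseconds);

                sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    var type = symbol.Substring(4, 1).ToUpper(); // two throw-away strings
                    if (type == "P") { /* ... */ }
                }
                sw.Stop();
                Console.WriteLine("string method: {0} ms", sw.ElapsedMilliseconds);
            }
        }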

    Read the article

  • Polite busy-waiting with WRPAUSE on SPARC

    - by Dave
    Unbounded busy-waiting is a poor idea for user-space code, so we typically use spin-then-block strategies when, say, waiting for a lock to be released or some other event. If we're going to spin, even briefly, then we'd prefer to do so in a manner that minimizes performance degradation for other sibling logical processors ("strands") that share compute resources. We want to spin politely and refrain from impeding the progress and performance of other threads — ostensibly doing useful work and making progress — that run on the same core. On a SPARC T4, for instance, 8 strands will share a core, and that core has its own L1 cache and 2 pipelines.

    On x86 we have the PAUSE instruction, which, naively, can be thought of as a hardware "yield" operator that temporarily surrenders compute resources to threads on sibling strands. Of course this helps avoid intra-core performance interference. On the SPARC T2 our preferred busy-waiting idiom was "RD %CCR,%G0", which is a high-latency no-op. The T4 provides a dedicated and extremely useful WRPAUSE instruction. The processor architecture manuals are the authoritative source, but briefly, WRPAUSE writes a cycle count into the PAUSE register, which is ASR27. Barring interrupts, the processor then delays for the requested period. There's no need for the operating system to save the PAUSE register over context switches, as it always resets to 0 on traps.

    Digressing briefly: if you use unbounded spinning, then ultimately the kernel will preempt and deschedule your thread if there are other ready threads that are starving. But by using a spin-then-block strategy we can allow other ready threads to run without resorting to involuntary time-slicing, which operates on a long-ish time scale. Generally, that makes your application more responsive. In addition, by blocking voluntarily we give the operating system far more latitude regarding power management. Finally, I should note that while we have OS-level facilities like sched_yield() at our disposal, yielding almost never does what you'd want or naively expect.

    Returning to WRPAUSE, it's natural to ask how well it works. To help answer that question I wrote a very simple C/pthreads benchmark that launches 8 concurrent threads and binds those threads to processors 0..7. The processors are numbered geographically on the T4, so those threads will all be running on just one core. Unlike the SPARC T2, where logical CPUs 0,1,2 and 3 were assigned to the first pipeline and CPUs 4,5,6 and 7 were assigned to the 2nd, there's no fixed mapping between CPUs and pipelines in the T4. And in some circumstances, when the other 7 logical processors are idling quietly, it's possible for the remaining logical processor to leverage both pipelines.

    Some number T of the threads will iterate in a tight loop advancing a simple Marsaglia xor-shift pseudo-random number generator. T is a command-line argument. The main thread loops, reporting the aggregate number of PRNG steps performed collectively by those T threads in the last 10-second measurement interval. The other threads (there are 8-T of these) run in a loop, busy-waiting concurrently with the T threads. We vary T between 1 and 8 threads and report on various busy-waiting idioms. The values in the table are the aggregate number of PRNG steps completed by the set of T threads. The unit is millions of iterations per 10 seconds. For the "PRNG step" busy-waiting mode, the busy-waiting threads execute exactly the same code as the T worker threads.
    We can easily compute the average rate of progress for individual worker threads by dividing the aggregate score by the number of worker threads T. I should note that the PRNG steps are extremely cycle-heavy and access almost no memory, so arguably this microbenchmark is not as representative of "normal" code as it could be. And for the purposes of comparison I included a row in the table that reflects a waiting policy where the waiting threads call poll(NULL,0,1000) and block in the kernel. Obviously this isn't busy-waiting, but the data is interesting for reference.

    Aggregate progress (millions of PRNG steps per 10 seconds) for T worker threads:

        Wait mechanism for 8-T threads    T=1   T=2   T=3   T=4   T=5   T=6   T=7   T=8
        Park thread in poll()            3265  3347  3348  3348  3348  3348  3348  3348
        no-op                             415   831  1243  1648  2060  2497  2930  3349
        RD %ccr,%g0 "pause"              1426  2429  2692  2862  3013  3162  3255  3349
        PRNG step                         412   829  1246  1670  2092  2510  2930  3348
        WRPause(8000)                    3244  3361  3331  3348  3349  3348  3348  3348
        WRPause(4000)                    3215  3308  3315  3322  3347  3348  3347  3348
        WRPause(1000)                    3085  3199  3224  3251  3310  3348  3348  3348
        WRPause(500)                     2917  3070  3150  3222  3270  3309  3348  3348
        WRPause(250)                     2694  2864  2949  3077  3205  3388  3348  3348
        WRPause(100)                     2155  2469  2622  2790  2911  3214  3330  3348
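
    For readers on managed runtimes, the spin-then-block policy described above translates roughly to the following C# sketch. SpinWait plays the role WRPAUSE/PAUSE plays at the hardware level (it emits PAUSE-class instructions on x86 and escalates to yields on its own); this illustrates the idiom and is not SPARC-specific code:

        using System;
        using System.Threading;

        class SpinThenBlock
        {
            static volatile bool _ready;
            static readonly ManualResetEventSlim _event = new ManualResetEventSlim();

            static void Waiter()
            {
                // Phase 1: bounded polite spinning.
                var spinner = new SpinWait();
                while (!_ready && !spinner.NextSpinWillYield)
                    spinner.SpinOnce();

                // Phase 2: still not ready, so block in the kernel and give
                // sibling strands/cores their compute resources back.
                if (!_ready)
                    _event.Wait();

                Console.WriteLine("waiter released");
            }

            static void Main()
            {
                var t = new Thread(Waiter);
                t.Start();
                Thread.Sleep(100);   // simulate work before the event fires
                _ready = true;
                _event.Set();
                t.Join();
            }
        }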

    Read the article

  • Extra Life 2012 - The Final Plea ... Until the Next One

    - by Chris Gardner
    I thought I'd share the email stream that my friends and family get about the event.

    So, here we are again. We scream closer to the event, and the goal is not met. I was approached by the ghost of feral platypii past last night. Well, approached is putting it lightly. I was mugged by the ghost of platypii past last night. He reminded me, in no uncertain terms, that I have only reached the midway point of my fundraising goal. He then reminded me, in even less uncertain terms, that we are one week away from the event. There were other reminders past that, but this is a family broadcast. *shudder*

    Now, let us be serious for a moment. The event organizers claim a personal story helps to tug heart strings, whatever those are...

    I've been to Children's Hospital of Birmingham. I had to take Spawn, the Latter, there to verify she was not going to die. Instead, she's just a ticking time bomb for the next generation, but I digress. While I was there, I saw things. I saw child after child after child waiting for their appointment. I saw the most sublime displays of children's art juxtaposed with hospital sterilization that I could ever possibly imagine. I saw and heard things that only occur in the nightmares of parents, and I was only in the waiting rooms.

    But I will never forget the 10-ish year old girl that came in for her regularly scheduled dialysis appointment ... as if it was just another Friday afternoon. She had her school books, a little snack, a book to read for pleasure, and a DVD, in case she finished her homework a little early. You know, everything you'd need for an afternoon hooked up to a huge medical machine that's going to clean out all the toxins in your blood. As she entered the secured area, she warmly greeted all the doctors and nurses with the same familiarity that I would greet the staff of my favorite coffee shop as I stopped in for my morning cup of coffee.

    I don't know the status of that little girl. I don't know if she's healthy or, quite frankly, alive. I don't even know her name, as I only heard it in passing for the 37 seconds our paths crossed. However, I do remember being incredibly moved and touched by her upbeat attitude about the situation, and I hope that my efforts last two Octobers got her, in some way, a little comfort. And, if she is still with us, I hope we can get her a little more.

    === PREVIOUS MESSAGE FOLLOWS ===

    Greetings (Again),

    If you are receiving this updated message, then you didn't feel generous the first time. Now, I tried to be nice the first time. I tried to send a simple, unobtrusive email message to get you into the spirit. Well, much like the bell ringers that I ignore in front of the Wal-Mart, you ignored me. I probably should have seen that coming...

    However, unlike those poor souls, I know how to contact you. And I can find out where you live. So, so, so, you better feel lucky that I'm too lazy to terrorize you people, because I could do it. Remember, it's not for me, it's for those poor kids... and the feral platypii. Because we can make more children, but platypii are hard to come by.

    === ORIGINAL MESSAGE FOLLOWS ===

    It's that time of year again. The time when I beg you for money for charity. See, unlike those bell ringers outside Wal-Mart, I don't do it when you have ten bazillion holiday obligations... Once again, I will be enduring a 24-hour marathon of gaming to raise money for Children's Hospital in Birmingham. All the money goes straight to them, and you get to tell Uncie Samuel that you're good for that money.
    I'd REALLY like to break $1000 this year, as I have come REALLY close to doing so for the past 2 years. This year, the event will take place on October 20th, beginning at 8 AM. Once again, I will try to provide some web streams, etc., if you want to point and laugh (especially if I have to resort to playing Dance Central at 4 AM to stay awake for the last part).

    Look at it this way: I'm going to badger you about this for the next month. You might as well donate some money so you can righteously tell me to shut the Smurf up. You can place your bid at the link below. Feel free to spread the word to anyone and everyone.

    I thank you. The children thank you. Several breeds of feral platypus thank you. Maybe, just maybe, doing so will help you feel the love felt by re-fried beans when lovingly hugged in a warm tortilla. Enjoy your burrito.

    http://www.extra-life.org/participant/cgardner

    Read the article

  • jQuery dialog momentarily displayed on page load

    - by Kevin Won
    I created a page that has a jQuery-based dialog using the standard jQuery UI function. I do this with out-of-the-box functionality of jQuery... nothing special at all. Here is my HTML for the dialog:

        <div id="myDialog">
            <!-- ... more html in here for the dialog -->
        </div>

    Then the jQuery, called in JavaScript, that transforms the <div> into a dialog:

        // pruned .js as an example of kicking up a jQuery dialog
        $('#myDialog').dialog({
            autoOpen: false,
            title: 'Title here',
            modal: true
        });

    Again, plain-vanilla jQuery. So you start this wizard by clicking on a link on the parent page, and it then spawns a jQuery dialog which has a significant chunk of HTML that includes images, etc.

    As I continued developing this page, I started to notice that when I loaded the page in the browser, the <div> tags that jQuery transforms into dialogs would very briefly be displayed. Then the page would act as expected. In other words, the dialog would not be hidden; it would be displayed briefly inline in the page. Quite ugly and unprofessional looking! But after a split second, the page would render correctly and look just as I expected/wanted.

    As the page size grew, the time the page would remain incorrectly rendered grew. My guess is that the rendering engine of the browser is rendering the page as it loads, then at the end it kicks off the jQuery that transforms the <div> into a dialog. This jQuery function will then transform the simple <div> into a jQuery dialog and hide it (since I have the autoOpen property set to false). Some browsers <cough>IE</cough> display it longer than others. My large-ish dialog now causes the page to incorrectly render for about 1 second... YUCK!

    I came up with a resolution to this problem which works OK, but I'm wondering if someone knows of a better way.

    Read the article

  • WSIT, Maven, and wsimport -- Can They Work Together?

    - by rtperson
    Hi all, I'm working on a small-ish multi-module project in Maven. We've separated the UI from the database layer using web services, and thanks to the jaxws-maven-plugin, the creation of the WSDL and WS client are more or less handled for us. (The plugin is essentially a wrapper around wsgen and wsimport.) So far so good.

    The problem comes when I try to layer WSIT security into the picture. NetBeans allows me to generate the security metadata easily, but wsimport seems completely incapable of dealing with anything beyond a Basic-auth level of security. Here's our current, insecure way of calling wsimport during a Maven build:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>jaxws-maven-plugin</artifactId>
            <version>1.10</version>
            <executions>
                <execution>
                    <goals>
                        <goal>wsimport</goal>
                    </goals>
                    <configuration>
                        <wsdlUrls>
                            <wsdlUrl>${basedir}/../WebService/target/jaxws/wsgen/wsdl/WebService.wsdl</wsdlUrl>
                        </wsdlUrls>
                        <packageName>com.yourcompany.appname.ws.client</packageName>
                        <sourceDestDir>${basedir}/src/main/java</sourceDestDir>
                        <destDir>${basedir}/target/jaxws</destDir>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    I have tried playing around with xauthFile and xadditionalHeaders, and passing javax.xml.ws.security.auth.username and password through args. I have also tried using wsimport from the command line to point to the Tomcat-generated WSDL, which has the additional security info. Nothing, however, seems to change the composition of the wsimport-generated files at all.

    So I guess my question here is: to get a WSIT-compliant client, am I stuck abandoning Maven and the jaxws plugin altogether? Is there a way to get a WSIT client to auto-generate? Or will I need to generate the client by hand?

    Let me know if you need any additional info beyond what I've written here. I'm deploying to Tomcat, although that doesn't seem to be an issue, as Maven seems happy to pull Metro into the deployed WAR file. Thanks in advance!

    Read the article

  • Binding Silverlight UserControl custom properties to its elements

    - by ghostskunks
    Hi. I'm trying to make a simple crossword puzzle game in Silverlight 2.0. I'm working on a UserControl-ish component that represents a square in the puzzle. I'm having trouble binding my UserControl's properties to its elements. I've finally (sort of) got it working (may be helpful to some - it took me a few long hours), but wanted to make it more 'elegant'. I've imagined it should have a compartment for the content and a label (in the upper right corner) that optionally contains its number. The content control would probably be a TextBox, while the label control could be a TextBlock. So I created a UserControl with this basic structure (the values are hardcoded at this stage): <UserControl x:Class="XWord.Square" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" FontSize="30" Width="100" Height="100"> <Grid x:Name="LayoutRoot" Background="White"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <TextBlock x:Name="Label" Grid.Row="0" Grid.Column="1" Text="7"/> <TextBox x:Name="Content" Grid.Row="1" Grid.Column="0" Text="A" BorderThickness="0" /> </Grid> </UserControl> I've also created DependencyProperties in the Square class like this: public static readonly DependencyProperty LabelTextProperty; public static readonly DependencyProperty ContentCharacterProperty; // ...(static constructor with property registration, .NET properties // omitted for brevity)... Now I'd like to figure out how to bind the Label and Content elements to the two properties. I do it like this (in the code-behind file): Label.SetBinding( TextBlock.TextProperty, new Binding { Source = this, Path = new PropertyPath( "LabelText" ), Mode = BindingMode.OneWay } ); Content.SetBinding( TextBox.TextProperty, new Binding { Source = this, Path = new PropertyPath( "ContentCharacter" ), Mode = BindingMode.TwoWay } ); That would be more elegant done in XAML. Does anyone know how that's done?
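    For reference, the registration omitted above would be the standard boilerplate, roughly as follows (the null default metadata is an assumption):

        // Sketch of the elided property registration.
        public static readonly DependencyProperty LabelTextProperty =
            DependencyProperty.Register(
                "LabelText", typeof(string), typeof(Square), null);

        public string LabelText
        {
            get { return (string)GetValue(LabelTextProperty); }
            set { SetValue(LabelTextProperty, value); }
        }

    As for doing it in XAML: Silverlight 2 has no ElementName binding, but one hedged workaround is to set LayoutRoot.DataContext = this; in the control's constructor, after which the elements can declare Text="{Binding LabelText}" and Text="{Binding ContentCharacter, Mode=TwoWay}" directly in the markup.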

    Read the article

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.) Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely used or even "generally accepted" standard for PL/SQL. That kind of puts a crimp in my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options: Add a feature that lets the user customize the standard and then reformat the code according to that standard. —OR— Add a feature that lets the user customize the standard and simply generate a violations list like StyleCop does, leaving the SQL untouched. In my mind, the first option saves the end-users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm just at a loss due to the lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring." P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case, and identifiers to lower case.

    Read the article

  • What's the best way to do base36 arithmetic in Perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (i.e., the digits are 0-9 and A-Z). N is finite, say 9. Provide basic arithmetic, at the very least the following 3: Addition (A+B) Subtraction (A-B) Whole division, i.e., floor(A/B). Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of the time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back" or converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above. It's "standard". Currently we're using an old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of reinventing the wheel, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up two 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree's dimensions are big both depth- and breadth-wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (i.e., a base36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent). Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign, for a parent with X children, orders which are somewhat evenly distributed between 0 and 36**9-1, so that almost no tree inserts result in a full re-ordering.
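    For what it's worth, a sketch of the wrap-a-CPAN-module route, assuming the Math::Base36 module fits (its encode_base36/decode_base36 do the radix conversion; the zero-padding argument is how I remember its interface, so verify before relying on it). Since 36**9 - 1 is about 1.0e14, well inside the 2**53 exact-integer range of a Perl double, plain numeric arithmetic is exact for 9-digit operands:

        use strict;
        use warnings;
        use POSIX qw(floor);
        use Math::Base36 qw(encode_base36 decode_base36);

        # Each operation: decode to a plain number, operate, and re-encode
        # zero-padded back to 9 base36 digits.
        sub b36_add { encode_base36(decode_base36($_[0]) + decode_base36($_[1]), 9) }
        sub b36_sub { encode_base36(decode_base36($_[0]) - decode_base36($_[1]), 9) }
        sub b36_div { encode_base36(floor(decode_base36($_[0]) / decode_base36($_[1])), 9) }

        print b36_add('00000000Z', '000000001'), "\n";   # prints 000000010

    Rolling the two conversions by hand is also only a few lines, but the module route keeps things "standard" per consideration 2.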

    Read the article

  • WordPress: how to call a plugin function with an AJAX call?

    - by Bee
    I'm writing a WordPress MU plugin. It includes a link with each post, and I want to use AJAX to call one of the plugin functions when the user clicks on this link, and then dynamically update the link text with output from that function. I'm stuck with the AJAX query. I've got this complicated, clearly hack-ish way to do it, but it is not quite working. What is the 'correct' or 'WordPress' way to include AJAX functionality in a plugin? (My current hack code is below. When I click the generated link I don't get the same output in the WP page as when I go directly to sample-ajax.php in my browser.) I've got my code[1] set up as follows: mu-plugins/sample.php: <?php /* Plugin Name: Sample Plugin */ if (!class_exists("SamplePlugin")) { class SamplePlugin { function SamplePlugin() {} function addHeaderCode() { echo '<link type="text/css" rel="stylesheet" href="' . get_bloginfo('wpurl') . '/wp-content/mu-plugins/sample/sample.css" />' . "\n"; wp_enqueue_script('sample-ajax', get_bloginfo('wpurl') . '/wp-content/mu-plugins/sample/sample-ajax.js.php', array('jquery'), '1.0'); } // adds the link to post content. function addLink($content = '') { $content .= "<span class='foobar clicked'><a href='#'>click</a></span>"; return $content; } function doAjax() { // echo "<a href='#'>AJAX!</a>"; } } } if (class_exists("SamplePlugin")) { $sample_plugin = new SamplePlugin(); } if (isset($sample_plugin)) { add_action('wp_head',array(&$sample_plugin,'addHeaderCode'),1); add_filter('the_content', array(&$sample_plugin, 'addLink')); } mu-plugins/sample/sample-ajax.js.php: <?php if (!function_exists('add_action')) { require_once("../../../wp-config.php"); } ?> jQuery(document).ready(function(){ jQuery(".foobar").bind("click", function() { var aref = this; jQuery(this).toggleClass('clicked'); jQuery.ajax({ url: "http://mysite/wp-content/mu-plugins/sample/sample-ajax.php", success: function(value) { jQuery(aref).html(value); } }); }); }); mu-plugins/sample/sample-ajax.php: <?php if (!function_exists('add_action')) { require_once("../../../wp-config.php"); } if (isset($sample_plugin)) { $sample_plugin->doAjax(); } else { echo "unset"; } ?> [1] Note: The following tutorial got me this far, but I'm stumped at this point. http://www.devlounge.net/articles/using-ajax-with-your-wordpress-plugin
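    For the record, a hedged sketch of the conventional route: WordPress ships an AJAX entry point at wp-admin/admin-ajax.php and dispatches to plugin callbacks hooked on wp_ajax_{action} (and wp_ajax_nopriv_{action} for logged-out visitors), so no standalone PHP file or wp-config.php bootstrapping is needed. The action name 'sample_do_ajax' below is invented for illustration.

        <?php
        // In the plugin file: register the AJAX callback with core's hooks.
        add_action('wp_ajax_sample_do_ajax', 'sample_do_ajax_handler');
        add_action('wp_ajax_nopriv_sample_do_ajax', 'sample_do_ajax_handler');

        function sample_do_ajax_handler() {
            echo "<a href='#'>AJAX!</a>";
            die(); // stop here so WordPress doesn't append extra output
        }
        ?>

    The JavaScript side then posts the matching action parameter to admin-ajax.php:

        jQuery.ajax({
            url: "http://mysite/wp-admin/admin-ajax.php",
            data: { action: "sample_do_ajax" },
            success: function(value) { jQuery(aref).html(value); }
        });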

    Read the article

  • Looking for an Open Source Project in need of help

    - by hvidgaard
    Hi StackOverflow! I'm a CS student well on my way to graduating. I have had a difficult time finding relevant student jobs (they seem to be taken mere hours after the notice goes up on the board), so instead I'm looking for an open source project in need of help. I'm aware that I should choose one that I use, but I'm not aware of any OS project that I use that needs help. That's why I'm asking you. I don't have any deep experience, but here are some of my biggest projects so far: a BitTorrent-ish client in Python (a subset of BitTorrent); an HTTP 1.1 webserver in Java; a compiler from a subset of Java to run on the JRE; a Flash-framework project to model an iPad look and feel (not to run actual iPad programs), complete with an API for programs; and a complete MySQL database for a booking system, with departure and arrival times, so you could only book valid tickets (with a Java frontend). I know Java, and languages like AS3 and C# feel natural, as does Python; I've also done a fair bit of hacking around in C, but I don't feel very comfortable with it. Mostly I'm afraid of fouling something up, because the language gives me such a high degree of control. I would like to think I'm well aware of good software design practices, but in reality what I do is ask myself "would I like to use/maintain this?", and I love to refactor my code when I see optimizations. I love algorithms and making them run in the best possible time. I don't have any preferred domain to work in, but I wouldn't mind it being graphics- or math-heavy. Ideally I'm looking for a project in C++ to learn the ins and outs of it, but I'm well aware that I don't know that language very well. I would like to have a mentor-like figure until I'm confident enough to stand on my own: not someone to review all my code (I'm sure someone will to start with anyway), but someone to ask questions about the project and language in question. I do have a wife and two children, so don't expect me to put in 10+ hours every week. In return, I can work on my own, and I strive to write modular and maintainable code. I know how to read an API and use Google, StackOverflow, and online resources in general. If you have any questions, shoot. I'm looking forward to your suggestions.

    Read the article

  • web service filling gridview awfully slow, as is paging/sorting

    - by nat
    Hi. I am making a page which calls a web service to fill a GridView, and it's returning a lot of data and is horribly slow. I ran svcutil.exe on the WSDL page and it generated the class and config for me, so I have a load of strongly typed objects coming back from each request to the many service functions. I am then using LINQ to loop around the objects, grabbing the necessary information as I go, but for each row in the grid I need to loop around an object and grab another list of objects (from the same request) and loop around each of them: a one-to-many, parent-to-child relationship. All of this then gets dropped into a custom DataTable a row at a time. Hope that makes sense. I'm not sure there is any way to speed up the initial load, but surely I should be able to page/sort a lot faster than it currently does; at the moment it appears to take as long to page/sort as it does to load initially. I thought that if I put the grid's data source in the session on first load, I could whip it out of the session to deal with paging/sorting and the like. Basically it is doing the below: protected void Page_Load(object sender, EventArgs e) { //init the datatable //grab the filter vars (if there are any) WebServiceObj WS = WSClient.Method(args); //fill the datatable (around and around we go) foreach (ParentObject po in WS.ReturnedObj) { var COs = from ChildObject c in WS.AnotherReturnedObj where c.whatever.equals(...) ...etc foreach(ChildObject c in COs){ myDataTable.Rows.Add(po.thisthing, po.thatthing, c.thisthing, c.thatthing, etc......); } } grdListing.DataSource = myDataTable; Session["dt"] = myDataTable; grdListing.DataBind(); } protected void Listing_PageIndexChanging(object sender, GridViewPageEventArgs e) { grdListing.PageIndex = e.NewPageIndex; grdListing.DataSource = Session["dt"] as DataTable; grdListing.DataBind(); } protected void Listing_Sorting(object sender, GridViewSortEventArgs e) { DataTable dt = Session["dt"] as DataTable; DataView dv = new DataView(dt); string sortDirection = " ASC"; if (e.SortDirection == SortDirection.Descending) sortDirection = " DESC"; dv.Sort = e.SortExpression + sortDirection; grdListing.DataSource = dv.ToTable(); grdListing.DataBind(); } Am I doing this totally wrongly? Or is the slowness just coming from the amount of data being bound in/returned from the web service? There are maybe 15 columns(ish) and a whole load of rows, with more being added to the data the web service queries all the time. Any suggestions/tips happily received. Thanks
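    One hedged thought on the initial load (the property names ParentId and Id below are invented; substitute whatever actually links the two objects): the LINQ query inside the outer loop rescans every child object once per parent, which is O(parents x children). Building a lookup from parent key to children first makes row-filling a single pass, and BeginLoadData/EndLoadData suspends the DataTable's per-row bookkeeping while it fills:

        // Sketch under assumed property names.
        var childrenByParent = WS.AnotherReturnedObj.ToLookup(c => c.ParentId);
        myDataTable.BeginLoadData();
        foreach (ParentObject po in WS.ReturnedObj)
        {
            foreach (ChildObject c in childrenByParent[po.Id])
            {
                myDataTable.Rows.Add(po.thisthing, po.thatthing, c.thisthing, c.thatthing);
            }
        }
        myDataTable.EndLoadData();

    For the sorting path, binding the DataView itself (grdListing.DataSource = dv;) instead of calling dv.ToTable() would also save one full copy of the table per postback.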

    Read the article

  • What libraries provide cross-platform 3D and P2P support?

    - by uckelman
    I'm trying to find a constellation of libraries which, taken together, meet the following requirements: Smooth scaling, rotation, panning (in two dimensions). I'll have a large bitmap (or SVG, in some cases), maybe up to 10000x10000 pixels, which serves as a map, with some middling number of small bitmaps (or, again, possibly SVG) that can be dragged around over it. I need to be able to zoom, rotate, and pan this scene; however, the view will always be normal to (i.e., looking head-on at) the large bitmap, so I'm not really using the depth dimension. Peer-to-peer. I'd like for multiple users to be able to connect in order to share one of the scenes mentioned above, preferably peer-to-peer, without much configuration by the user. I'm intending to have a server running for cases where users are unable to connect P2P; I'd like the failover to happen automatically, or possibly to have some way of promoting clients that are capable of acting as servers themselves. Synchronization. Once a user has started dragging one of the small bitmaps (a piece), no other user should be able to drag that piece until the drag stops. I haven't thought out exactly how to do this; there might be a simple solution, or this kind of synchronization might be something that a library provides. Cross(ish)-platform. I need to be able to run on Linux, Windows, and Mac OS. It would be nice to also be able to run on tablets. Having mostly the same code for all platforms is a plus, but not absolutely necessary. (L)GPL compatible. I'm planning to release under the LGPL or GPL, preferably the latter, so I need libraries with compatible licenses. I'm not set on any particular language; I'd like to use the library or libraries which make the work easiest, though my preference is to work in at most two languages for the project. (The Model could potentially be in one language and the View in another, so they could talk to each other via some protocol I define, if that would get me a better selection of libraries to use.) Can anyone offer suggestions for what to use?

    Read the article

  • Issues querying Access '07 database in C#

    - by Kye
    I'm doing a .NET unit as part of my studies. I've only just started, with a lecturer that has kinda failed to give me the most solid foundation with .NET, so excuse the noobishness. I'm making a pretty simple and generic database-driven application. I'm using C# and I'm accessing a Microsoft Access 2007 database. I've put the database-ish stuff in its own class with the methods just spitting out OleDbDataAdapters that I use for committing. I feed any methods which perform a query a DataSet object from the main program, which is where I'm keeping the data (multiple tables in the db). I've made a very generic private method that I use to perform SQL SELECT queries and have some public methods wrapping that method to get products, orders, etc. (it's a generic retail database). The generic method uses a separate Connect method to actually make the connection, and it is as follows: private static OleDbConnection Connect() { OleDbConnection conn = new OleDbConnection( @"Provider=Microsoft.ACE.OLEDB.12.0; Data Source=C:\Temp\db.accdb"); return conn; } The generic method is as follows: private static OleDbDataAdapter GenericSelectQuery( DataSet ds, string namedTable, String selectString) { OleDbCommand oleCommand = new OleDbCommand(); OleDbConnection conn = Connect(); oleCommand.CommandText = selectString; oleCommand.Connection = conn; oleCommand.CommandType = CommandType.Text; OleDbDataAdapter adapter = new OleDbDataAdapter(); adapter.SelectCommand = oleCommand; adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey; adapter.Fill(ds, namedTable); return adapter; } The wrapper methods just pass along the DataSet that they received from the main program, the namedTable string is the name of the table in the dataset, and you pass in the query you wish to make. It doesn't matter which query I give it (even something simple like SELECT * FROM TableName), I still get thrown an OleDbException stating that there was an error with the FROM clause of the query. I've resorted to building the queries with Access itself, but still no luck. Obviously there's something wrong with my code, which wouldn't actually surprise me. Here are some wrapper methods I'm using. public static OleDbDataAdapter GetOrderLines(DataSet ds) { OleDbDataAdapter adapter = GenericSelectQuery( ds, "orderlines", "SELECT OrderLine.* FROM OrderLine;"); return adapter; } They all look the same; it's just the SQL that changes.
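    One guess worth ruling out (hedged; it can't be confirmed as the cause from here): Access throws exactly this kind of FROM-clause error when a table or column name collides with one of its reserved words, and the standard workaround is to bracket the identifiers. That only requires changing the SQL string in the wrapper, for example:

        // Sketch: square brackets stop Access from treating the table
        // name as a reserved word or expression.
        public static OleDbDataAdapter GetOrderLines(DataSet ds)
        {
            OleDbDataAdapter adapter = GenericSelectQuery(
                ds, "orderlines", "SELECT * FROM [OrderLine];");
            return adapter;
        }

    If the bracketed form works, the original table name was the problem; if not, printing the full exception text usually narrows down which part of the FROM clause Access is rejecting.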

    Read the article

  • How to use an adjacency matrix to determine which rows to 'pass' to a function in r?

    - by dubhousing
    New to R, and I have a long-ish question: I have a shapefile/map, and I'm aiming to calculate a certain index for every polygon in that map, based on attributes of that polygon and each polygon that neighbors it. I have an adjacency matrix -- which I think is the same as a "1st-order queen contiguity weights matrix", although I'm not sure -- that describes which polygons border which other polygons, e.g.,

    POLYID A B C D E
    A      0 0 1 0 1
    B      0 0 1 0 0
    C      1 1 0 1 0
    D      0 0 1 0 1
    E      1 0 0 1 0

    The above indicates, for instance, that polygons 'C' and 'E' adjoin polygon 'A'; polygon 'B' adjoins only polygon 'C', etc. The attribute table I have has one polygon per row:

    POLYID TOT L10K 10_15K 15_20K ...
    A      500 24   30     77     ...

    Where TOT, L10K, etc. are the variables I use to calculate an index. There are 525 polygons/rows in my data, so I'd like to use the adjacency matrix to determine which rows' attributes to incorporate into the calculation of the index of interest. For now, I can calculate the index when I subset the rows that correspond to one 'bundle' of neighboring polygons, and then use a loop (if it's of interest, I'm calculating the Centile Gap Index, a measure of local income segregation). E.g., subsetting the 'neighborhood' of the Detroit City Schools: Detroit <- UNSD00[c(142,150,164,221,226,236,295,327,157,177,178,364,233,373,418,424,449,451,487),] Then record the marginal column proportions and a running total: catprops <- vector() for(i in 4:19) { catprops[(i-3)]<-sum(Detroit[,i])/sum(Detroit[,3]) } catprops <- as.data.frame(catprops) catprops[,2]<-cumsum(catprops[,1]) Columns 4:19 are the necessary ones in the attribute table. Then I use the following code to calculate the index -- note that the loop has "i in 1:19" because the Detroit subset has 19 polygons. cgidistsum <- 0 for(i in 1:19) { pranks <- vector() for(j in 4:19) { if (Detroit[i,j]==0) pranks <- append(pranks,0) else if (j == 4) pranks <- append(pranks,seq(0,catprops[1,2],by=catprops[1,2]/Detroit[i,j])) else pranks <- append(pranks,seq(catprops[j-4,2],catprops[j-3,2],by=catprops[j-3,1]/Detroit[i,j])) } distpranks <- vector() distpranks<-abs(pranks-median(pranks)) cgidistsum <- cgidistsum + sum(distpranks) } cgi <- (.25-(cgidistsum/sum(Detroit[,3])))/.25 My apologies if I've provided more information than is necessary. I would really like to exploit the adjacency matrix in order to calculate the CGI for each 'bundle' of these rows. If you happen to know how I could get started with this, that would be great. And my apologies for any novice mistakes; I'm new to R!
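    A hedged sketch of the adjacency-matrix part, assuming the matrix lives in an R object adj whose rows and columns are ordered the same way as the rows of UNSD00, and that the CGI code above has been wrapped in a function cgi_for_bundle() taking a data frame (both names are invented here): each polygon's bundle is just itself plus the nonzero entries of its row.

        # Sketch: derive each polygon's bundle from the adjacency matrix
        # and apply the existing CGI computation to it.
        cgis <- numeric(nrow(adj))
        for (i in seq_len(nrow(adj))) {
          bundle_rows <- c(i, which(adj[i, ] == 1))  # polygon i plus its neighbors
          bundle <- UNSD00[bundle_rows, ]
          cgis[i] <- cgi_for_bundle(bundle)
        }

    Inside the wrapped function, the hardcoded 19s would become nrow(bundle) so the same code works for any bundle size.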

    Read the article

  • SUSE EC2 Problem - zypper - Permission denied

    - by phuu
    I'm trying to use zypper to install gcc on my Amazon EC2 instance running SUSE. When I try zypper in gcc I get: Retrieving repository 'SLE11-SDK-SP1' metadata [] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLE11-SDK-SP1/sle-11-i586/media.1/media' denied. Abort, retry, ignore? [a/r/i/?] (a): i Retrieving repository 'SLE11-SDK-SP1' metadata [error] Repository 'SLE11-SDK-SP1' is invalid. Can't provide /media.1/media : User-requested skipping of a file Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLE11-SDK-SP1' because of the above error. Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [|] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): i Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [error] Repository 'SLE11-SDK-SP1-Updates' is invalid. Can't provide /repodata/repomd.xml : User-requested skipping of a file Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLE11-SDK-SP1-Updates' because of the above error. Retrieving repository 'SLES11-Extras' metadata [/] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): r Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): zypper in gcc Invalid answer 'zypper in gcc'. [a/r/i/?] (a): a Retrieving repository 'SLES11-Extras' metadata [error] Repository 'SLES11-Extras' is invalid. Can't provide /repodata/repomd.xml : Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLES11-Extras' because of the above error. Retrieving repository 'SLES11-SP1' metadata [-] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLES11-SP1/sle-11-i586/media.1/media' denied. Abort, retry, ignore? [a/r/i/?] (a): a Retrieving repository 'SLES11-SP1' metadata [error] Repository 'SLES11-SP1' is invalid. Can't provide /media.1/media : Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLES11-SP1' because of the above error. Retrieving repository 'SLES11-SP1-Updates' metadata [] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied. I've searched for the problem and this thread came up, but it offered no solutions. I've tried sces-activate. Am I doing something wrong? I should say I'm very new to this, and I admit I don't really know what I'm doing, but I'm trying to learn about setting up and running a server and so I thought I'd throw myself in at the deep(ish) end. Thanks for reading.

    Read the article

< Previous Page | 10 11 12 13 14 15 16  | Next Page >