
  • What kind of position matches my skills, experience and interests? [closed]

    - by Ryan
    I work in a large firm and my current job covers a variety of duties. Due to several factors (hours, a pay cut, limited career growth) I am seriously considering finding a new job. I have worked for the company nearly 4 years, and almost 2 years ago I transitioned into more of a business analyst role (previously I was in a client-facing role for our audit group). In this role I have overseen all aspects of the development of a large-scale re-platforming of our firm's key tool for analyzing investment portfolios: I gathered requirements, wrote specs, designed the UI and functionality, worked closely with developers (onshore and offshore) to see that the implementation was correct, managed schedules, and was the lead tester. This is a large-scale system used by thousands of people around the world. I've also written Excel macros and reports in SQL, given trainings, written technical manuals, interfaced with senior managers and partners, etc.

    I've been on a couple of interviews sporadically, most of them aimed at higher-level management consulting positions dealing with strategy, overall process management, project management, etc. What really interests me is the technical work and overseeing a project from beginning to end (although I would rather not have to do so many of the tasks on my own). I genuinely like a lot of what I do, but the company culture and attitude toward overworking people, combined with my recent pay cut (my overtime was cut due to a promotion to a higher level), has led me to want to seek work elsewhere.

    The problem is: what type of work could I realistically do? I feel that traditional business analysis is too much business and not enough tech, and I've really taken a shine lately to beefing up my programming abilities and creating small programs to automate things around work. I also feel that my actual years of experience as a business analyst (figure 1.5 years realistically) put me at a junior level doing a lot of grunt requirements gathering, when the work I have been doing with my current company is more in line with what a Program Manager does (depending on your definition, I guess). So when I'm job hunting I get a bit perplexed: the traditional BA roles wouldn't really suit me, and even those that would usually ask for 5-10 years of experience for the type of work similar to what I've done (I've also found most BA jobs to be contract-only, which at the moment I'm not too keen on). Program Manager interests me, but again the experience is lacking, because that's a much more senior position.

    Am I in some kind of career no-man's land? Any idea what would best suit me given my experience, abilities, and interests? I plan to keep learning programming on the side, but I don't expect to get a job as a straight programmer given my relative inexperience with programming.

  • At $20/month, Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decently performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like mobile tracing and vehicle tracing, but due to the high cost of Windows hosting I developed those services in PHP (not an easy task for a .NET developer) and hosted them on Linux servers. With the recent evolution of Windows Azure, though, hosting ASP.NET websites on highly reliable servers is affordable: today anyone can host a responsive, highly available ASP.NET website for just $20/month.

    My website coziie.com runs on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are under 3 seconds. All I spend to run this website is around $20/month; translated to Indian rupees that's roughly Rs.1000. The web server of coziie.com is powered by a single Extra Small Web role instance, and the backend is powered by a SQL Azure instance. Azure has been quite impressive to deliver 99.97% uptime: response times during peaks are around 3 seconds, and under normal load around 1.5 seconds (per the uptime report provided by Pingdom over the last year).

    For just $20/month, Windows Azure takes care of the following apart from hosting:

    - Patches the Windows OS to the latest version
    - Upgrades ASP.NET to the latest version (coziie.com runs on ASP.NET MVC 3, and soon I'll upgrade it to ASP.NET MVC 4)
    - Hosts data on the latest version of SQL Server (SQL Azure)
    - Maintains 3 copies of the database and automatically recovers from server failures and disasters; I never worry about database backups/restores
    - Provides a staging environment for deploying applications for testing before moving them to production (I upgrade twice a month on average)

    With Windows Azure I no longer focus on server maintenance or data backups; they are taken care of by the Microsoft team, and I just focus on building my website. I wish there were a low-cost Linux version of Windows Azure so I could stop worrying about server maintenance for this blog! If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with; as your traffic grows, you can move to paid hosting.

  • Is it okay to have people with multiple roles in a Scrum team?

    - by Wayne M
    I'm evaluating some Agile-style methodologies for possible introduction to my team. With Scrum, is it allowable to have the same person perform multiple roles? We have a small team of four developers and a web designer; we don't really have a lead (I fulfill this role), QA testers, or business analysts, and all of our development tasks come from the CIO. Automated testing is seen as a total waste of time, and everything focuses on speed rather than quality. What happens is the CIO comes up with a development task (whether a feature or a bug) and gives it to a developer (not to the whole team; to an individual, often in private or out of the blue), who is then expected to get it completed. The CIO doesn't gather requirements beyond the initial idea (and this has bitten us before, as we'll implement something only to find out that none of the end users can use the feature, because they weren't consulted or even informed about it before we developed it, and in a panic we'll be told to revert the change), but he requires say in, and approval of, everything that we do.

    First things first: is a Scrum style worth considering to introduce some standards and practices? From my reading, Scrum seems to rely on a bit more trust and communication, and focuses more on project management than on development, which is something we are completely devoid of, as we have no semblance of project management at present.

    Second, if it can work, is it unreasonable for someone (let's say myself) to act as both Scrum Master and a developer? Or for a developer to also be the Product Owner (although chances are this will be the CIO, who isn't a developer)? I realize the Scrum Master and the Product Owner should be different people, but at the same time I don't think we have anyone who has the qualities of a Product Owner (chances are it would turn into an "I need all these stories, I don't care how, just get it done" type of deal, and/or any freeze would be unfrozen on a whim).

    It seems to me that I might need to pick and choose pieces of Scrum/XP/Lean to compensate for how things are done currently, as it's highly unlikely that the mentality can be changed. For instance, pair programming would never fly (it's seen as a waste: you get half the tasks done if you need two people for everything), and TDD would be a hard sell, but short cycles would be welcomed.

  • Is SOAP HTTP POST more complicated than I thought?

    - by Pete Petersen
    I'm currently writing a bit of code to send some XML data to a web service via HTTP POST. I thought this would be really simple and have written the following example code (C#):

        Console.WriteLine("Press enter to send data...");
        while (Console.ReadLine() != "q")
        {
            HttpWebRequest httpWReq = (HttpWebRequest)WebRequest.Create(@"http://localhost:8888/");

            Foo fooItem = new Foo
            {
                Member1 = "05",
                Member2 = "74455604",
                Member3 = "15101051",
                Member4 = 1,
                Member5 = "fsf",
                Member6 = 6.52,
            };

            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = fooItem.ToXml();
            byte[] data = encoding.GetBytes(postData);

            httpWReq.Method = "POST";
            httpWReq.ContentType = "application/xml";
            httpWReq.ContentLength = data.Length;

            using (Stream stream = httpWReq.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            HttpWebResponse response = (HttpWebResponse)httpWReq.GetResponse();
            string responseString = new StreamReader(response.GetResponseStream()).ReadToEnd();
            Console.WriteLine("Received " + responseString);
            Console.WriteLine("Press enter to send data...");
        }

    This is all I thought would be necessary, but I have now been given the details for the web service, which include some information that is unfamiliar to me and which I'm unsure whether I need to include:

        <url>http://sometext/soap/rpc</url>
        <namespace>http://sometext/a.services</namespace>
        <method>receiveInfo</method>
        <parm-id>xmldata</parm-id>   (Input data) (Actual XML data as string)
        <parm-id>status</parm-id>    (Output data)
        <userid>user</userid>
        <password>pass</password>
        <secure>false</secure>

    I guess this means I need to include a username and password somehow, but I'm not sure what the namespace or method fields are used for. Could anyone give me a hint? Sorry, I've never used web services before.
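
    The <namespace> and <method> entries suggest the endpoint expects a SOAP RPC call rather than a raw XML POST: the payload gets wrapped in a SOAP envelope that names the method and its namespace. Below is a minimal, hedged sketch of what that might look like in C#. The SOAPAction header value and the use of HTTP Basic authentication for the userid/password are assumptions (the service details above don't say where the credentials go), so check against the service's WSDL:

        using System;
        using System.IO;
        using System.Net;
        using System.Security;
        using System.Text;

        class SoapPostSketch
        {
            static void Main()
            {
                // The serialized Foo XML travels as an escaped string inside the
                // <xmldata> parameter, per the service details above.
                string xmlPayload = "<foo>...</foo>";

                string envelope =
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                      "<soap:Body>" +
                        "<receiveInfo xmlns=\"http://sometext/a.services\">" +
                          "<xmldata>" + SecurityElement.Escape(xmlPayload) + "</xmldata>" +
                        "</receiveInfo>" +
                      "</soap:Body>" +
                    "</soap:Envelope>";

                HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://sometext/soap/rpc");
                req.Method = "POST";
                req.ContentType = "text/xml; charset=utf-8";
                // Many SOAP 1.1 services require a SOAPAction header; this exact
                // value is a guess and should come from the WSDL.
                req.Headers.Add("SOAPAction", "\"receiveInfo\"");
                // Assumption: the userid/password are HTTP Basic credentials.
                req.Credentials = new NetworkCredential("user", "pass");

                byte[] data = Encoding.UTF8.GetBytes(envelope);
                req.ContentLength = data.Length;
                using (Stream s = req.GetRequestStream())
                    s.Write(data, 0, data.Length);

                using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
                using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());
            }
        }

    If the service publishes a WSDL, generating a typed proxy (Add Service Reference, or the wsdl.exe/svcutil tools) avoids hand-building the envelope entirely.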

  • Am I deluding myself? Business analyst transition to programmer

    - by Ryan
    Current job: working as the lead business analyst for a Big 4 firm, leading a team of developers and testers on a large-scale re-platforming project (4 onshore devs, 4 offshore devs, several onshore/offshore testers). I also work in a similar capacity on other, smaller projects.

    Extent of my role: gathering and writing requirements, creating functional specifications, designing the UI (basically mapping out all front-end aspects of the system), working closely with devs to communicate/clarify requirements and come up with solutions when we hit roadblocks, writing test cases (and doing much of the testing), working with senior management and key stakeholders, managing beta testers, creating user guides, leading training sessions, and providing key technical support. I also write quite a few Excel macros in VBA (several of my macros are now used across the entire firm, so maybe around 1000 people use them) and use SQL on a daily basis, both on the SQL compact files the program relies on, on our SQL Server data, and on any Access databases I create. The developers feel that I am quite good in this role because I understand a lot about programming, inherent system limitations, the structure of the databases, etc., so it's easier for me to communicate ideas and come up with suggestions when we face problems.

    What really interests me is developing software. I do a fair amount of programming in VBA and have wanted to learn C# for a while (the dev team uses C#; I review code occasionally for my own sake but have not had any practical experience using it). I'm interested in not just the business process but also the technical side of things, so the traditional BA role doesn't really whet my appetite for the kind of work I want to do. Right now I have a few small projects that managers have given me, and I'm finding new ways to do them (like building custom Access applications), so there's a bit here and there to keep me interested.

    My question is this: what I would like to do is build custom Excel or Access applications for small businesses as a freelance business (a one-man shop, maybe with an occasional contractor depending on a project's complexity). This would obviously start as a part-time venture while I have a day job, but eventually become a full-time job. Am I deluding myself in thinking I can go from BA/part-time VBA programmer to making a full-time go of a freelance business (starting out just writing custom Excel/Access apps in VBA)? Or is this type of thing not usually attempted until someone gains years of full-time programming experience? And is there even a market for these types of applications among small (and maybe medium-sized) businesses?

  • How to troubleshoot ethernet port on laptop

    - by Psallas Vassilios
    I have a problem with my wired connection: my laptop doesn't seem to recognize that I have plugged in an Ethernet cable. I tried to download new drivers for my Ethernet card, but I couldn't find any solutions. Maybe because I am new to Linux, I'm not familiar with running commands in the terminal. OK, I have typed the command and here are the results:

        00:04.0 Ethernet controller [0200]: Silicon Integrated Systems [SiS] 191 Gigabit Ethernet Adapter [1039:0191] (rev 02)

    For the second reply, I don't know if the following is what you asked for:

    - Memory: 3.9 GiB
    - Processor: Intel Core 2 Duo CPU P8800 @ 2.66GHz × 2
    - OS type: 32-bit

    My Ethernet connection had some problems on Windows too. I recently changed my internet provider, and since then the Ethernet cable has not been recognized by the laptop (at that time I was still on Windows). I thought that with Ubuntu the problem would be solved, but unfortunately it persists. If someone can help me solve it, I'll be thankful. Here are the results of the first three commands you told me to run:

        $ lsmod | grep sis190
        sis190                 22570  0

        $ sudo modprobe sis190

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:90:f5:90:81:7e
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:185 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:185 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:22672 (22.6 KB)  TX bytes:22672 (22.6 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:25:d3:2c:3a:ae
                  inet addr:192.168.1.72  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::225:d3ff:fe2c:3aae/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:260 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:363 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:71992 (71.9 KB)  TX bytes:52000 (52.0 KB)

    and the results of the last two commands:

        $ dmesg | grep -e eth -e sis190
        [    0.816667] sis190: sis190 Gigabit Ethernet driver 1.4 loaded
        [    0.816728] sis190 0000:00:04.0: setting latency timer to 64
        [    0.816751] sis190: 0000:00:04.0: Read MAC address from EEPROM
        [    0.904032] sis190: 0000:00:04.0: Realtek PHY RTL8201 transceiver at address 1
        [    1.416030] sis190: 0000:00:04.0: Using transceiver at address 1 as default
        [    1.448235] sis190 0000:00:04.0: eth0: 0000:00:04.0: SiS 191 PCI Gigabit Ethernet adapter at f8410000 (IRQ: 19), 00:90:f5:90:81:7e
        [    1.448238] sis190 0000:00:04.0: eth0: GMII mode.
        [    1.448243] sis190 0000:00:04.0: eth0: Enabling Auto-negotiation
        [   11.560907] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
        [   16.372019] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
        [   16.372265] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
        [   26.424038] sis190 0000:00:04.0: eth0: auto-negotiating...

        $ nm-tool
        NetworkManager Tool
        State: connected (global)

        - Device: eth0 -----------------------------------------------------------------
          Type:              Wired
          Driver:            sis190
          State:             unavailable
          Default:           no
          HW Address:        00:90:F5:90:81:7E

          Capabilities:
            Carrier Detect:  yes

          Wired Properties
            Carrier:         off

  • 2D camera perspective projection from 3D coordinates -- HOW?

    - by Jack
    I am developing a camera for a 2D game with a top-down view that has depth; it's almost a 3D camera. Basically, every object has a Z even though it is in 2D and, similarly to parallax layers, its position, scale, and rotation speed vary based on its Z. I guess this would be a perspective projection. But I am having trouble converting the objects' 3D coordinates into the 2D space of the screen so that everything has the correct perspective and scale. I never learned matrices, though I did dig into the topic a bit today. I tried without using matrices, following the Wikipedia article on 3D projection, but every attempt gave awkward results. I'm using ActionScript 3 and Flash 11+ (Starling), where the screen uses a left-handed coordinate system.

    The formulas I used are the two from the "Perspective projection" section of that Wikipedia article (the camera transform d = a - c with rotations applied, and the projection onto the image plane; that's also where all the variables are defined). The long formula is greatly simplified because I believe a normal top-down 2D camera has no X/Y/Z rotation values (correct?), so it becomes d = a - c. Still, I can't get it to work. Maybe you could explain what numbers I should put in a(xyz), c(xyz), theta(xyz), and, particularly, e(xyz)? I don't quite get how e differs from c in my case. c.z is also an issue to me: if the Z of the camera's target object is 0, should the camera's Z be something like -600 (= a focal length of 600)?

    Whatever I do, it's wrong. I only got it to work when I used arbitrary calculations that "looked" right, like most cameras with parallax layers seem to do, but that's fake! ;) If I want objects to travel between Z layers, I might as well do it right. :) Thanks a lot for your help!
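
    For the no-rotation case, the projection collapses to a single scale factor, focal / dz, where dz is the object's depth in front of the camera: with the camera at z = -600 and a focal length of 600, an object at z = 0 projects at scale 1, which answers the c.z question above. A minimal sketch of that simplified (d = a - c) case, written in C# rather than ActionScript and with purely illustrative names:

        using System;

        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
        }

        class TopDownCamera
        {
            public Vec3 Position = new Vec3(0f, 0f, -600f); // camera sits "above" the scene
            public float FocalLength = 600f;                // pinhole-to-screen distance
            public float CenterX = 400f, CenterY = 300f;    // screen center in pixels

            // Projects a world point into screen space. Larger Z (farther from
            // the camera) pulls objects toward the screen center and shrinks
            // them; apply 'scale' to the sprite's size as well.
            public void Project(Vec3 p, out float screenX, out float screenY, out float scale)
            {
                float dz = p.Z - Position.Z;  // depth relative to the camera (600 at Z = 0)
                scale = FocalLength / dz;     // perspective divide: 1.0 at Z = 0
                screenX = CenterX + (p.X - Position.X) * scale;
                screenY = CenterY + (p.Y - Position.Y) * scale;
            }
        }

    In the article's terms, e.z plays the role of the focal length (the distance from the camera pinhole to the display plane) and c is the camera position itself; that is why c.z = -600 with a focal length of 600 puts the z = 0 layer at 1:1 scale.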

  • Training v. Teaching

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/28/training-v.-teaching.aspx

    As some of you may know, I recently accepted a position to teach an undergraduate course at my alma mater. Yesterday, I had my first day in an academic classroom, and I immediately noticed a difference in the interactions with the students: they don't act like students in a professional training or conference talk. I wanted to use this opportunity to enumerate some of those differences.

    The immediate thing I noticed was the lack of an open environment. This is not to say the class was hostile towards me. I am used to entering the room, bantering with the audience, loosening everyone up a bit, and flowing into the discussion. A purely academic audience does not banter; at least, they do not banter on day one. I attribute this to two factors. The first is a greater perception of authority. In a training or conference environment, I am an equal with the audience, even when I am the subject matter expert: we're all professionals, we're all there to learn from each other, share our stories, and enjoy the journey. In the academic classroom, there was a distinct class difference. I had forgotten about this distinction; I had gained professional familiarity with the staff by the time I completed my master's.

    This leads to the other distinction: an expectation of performance. At a conference or professional training, there is generally no (immediate) grading. It may be preparation for a certification exam, but I'm not the one responsible for delivering the exam. That was not the case in the academic classroom. These students are battling for points, and I am the sole arbiter. They are less likely to let the material wash over them and apply it to their past experiences; they were heads-down, taking notes.

    I don't want to leave the impression that there was no interaction in the classroom. I spent a good deal of time working problems with the class on the whiteboard, and I tried to get the class to help me work out the steps. This opened up a few of them. After every conference or training class, I always get a few people who email me afterward to continue the conversation. I am very curious to see if anybody comes to my office hours tomorrow. However, that is a curiosity that will have to wait until tomorrow.

  • How are you coping with Ubuntu's Unity app launcher? (It auto-hides, can't minimize apps)

    - by Bad Learner
    [Firstly, let me tell you that this cannot be subjective in any way, as I think at least Ubuntu beginners will have these questions boggling their minds; and yes, this is a question that has a definite answer, so I am completely within the rules.]

    Okay, coming to the point: I see that Ubuntu has used Unity since v10.xx (the netbook edition?) and carried it over to v11.04 and v11.10. As someone who stuck with Windows all these years, I find it somewhat difficult to cope with Ubuntu's Unity, for the following reasons:

    [1] The Unity app launcher (on the screen's left) auto-hides when a window is maximized.
    [2] And once launched, apps cannot be minimized by clicking the app's icon in the launcher; I have to go to the top-left of the screen and click the "_" button.

    I do know I can fix these issues by installing some configuration tool. But the thing is, if that's how it's meant to work, Canonical/Ubuntu would have designed it that way. But they didn't. Why? With respect to the above points:

    [1] (edited) So, does it mean it's good to work without maximizing windows? Because if I maximize a window, the app launcher hides, and I need to hover the mouse to the left of the screen, wait a bit (even if it's a second or less, I can still feel the lag), and then click on the next app icon in the launcher to switch to it. I do know I can use Alt+Tab to switch, but I am never sure which window comes next. This, I feel, isn't productive. It also makes me feel Ubuntu is designed for large screens (it's nice on my 1920x1080 screen), because on a large screen I can have two windows side by side or something like that. This is not possible on smaller screens.

    [2] Being able to minimize an application's window by clicking its icon in the launcher (just like on Windows and probably elsewhere) would have been great, rather than having to go to the top-left and click the _ (minimize) button, which brings up the app launcher itself (from hiding) most of the time. It's too tiring to have these small issues in the UI.

    I really would like to know: how are you coping with these issues as they are?

  • Is it possible to fulfill the social needs of a human being through a 3D social game like IMVU?

    - by Totty
    (I'm not advertising or promoting this game; it's just an example from my experience, and I would like your opinion on the matter if possible.) I've started researching "things" about games, and I decided to begin playing IMVU because a friend of mine said it's cool. At first it seemed like just another 3D social game, not so cool. But I "tried to like it", and after one day I can say I'm addicted to it! Yes; let me explain.

    About the game: you can go into chat rooms and move to positions. Some positions are like sitting on a sofa or the floor, dancing alone or with a partner, kissing, and more in this vein. In the free version of the game there is no nudity. You can even listen to music or watch YouTube. The 3D graphics are quite low-end, so it's not as real as today's paid PC games.

    About my experience: at first I was going into chat rooms with my friend; they seemed very nice. There were people talking about general stuff, much as in real life. Well, I began to know some girls (yes, virtual girls commanded by real girls, I hope!). Things happened: some girls are just crazy, not like in real life; they make out before even talking. With other girls you can speak a little, then they add you to their friend list; sometimes they invite you to their virtual places. Some girls have IMVU-only boyfriends (not in reality), and most of them don't even make out in the game, so there's a real level of commitment involved here! (Though from what my friend told me, his last about 3 days.) Some others have real and IMVU boyfriends who are the same person. So far I haven't found a girl whose boyfriend differs between IMVU and reality, nor one with multiple boyfriends. There are rooms where the same people meet every day and speak about general stuff, relationships, and so on. They are nice to you, they "feel" you and show care. This is what amazes me: they treat you like a real human being, as if you were their friend in the real world (of course it's not always like this). There are jealous girls too, and competitiveness between females; lol, I know you loled!

    So today I closed the door to my room and played it all day long, and guess what: I didn't feel the need to be with a real person at all. Normally, if I stayed alone for a full day I would go a bit crazy. So the question is: is it just me who seems able to fulfill my social needs this way, or is there something more? Thanks for your precious time reading my full question.

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with the sudo apt-get upgrade command on my Ubuntu 10.04 box, and I now have Ubuntu 12.04 (3.2.0-29-generic-pae). I have two monitors and the following GPU:

        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across the two monitors. When Ubuntu starts, only one monitor is on, and I can see this message on the active monitor: "Not optimum mode. Recommended mode: 1680x1050 60Hz". I used the Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

        xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
        xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings, it says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server." I ran nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current: Enabled but Not in use. I also tried nvidia-173, but I ended up at a tty immediately at startup, so I removed it. I used to have some problems with the Nvidia proprietary drivers on 10.04 (I had to put paths to EDID files in /etc/X11/xorg.conf explicitly), but the resolution was as recommended and both monitors were working.

    If I understand correctly, the nouveau drivers are used by default now, because the resolution is still quite high, definitely not 800x600. xrandr shows:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200       66.0*
           1280x1024       76.0
           1024x768        76.0
           800x600         73.0
           640x480         73.0
           640x400          0.0
           320x400          0.0
           1680x1050_60.00 (0x4f)  146.2MHz
                h: width  1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz
                v: height 1050 start 1053 end 1059 total 1089          clock 60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible when placed inside a Firefox window, and only one monitor works. I like open source, and if possible I'd prefer to use the nouveau drivers, but a few things would need fixing. I'm also curious why the nvidia-current drivers from the repository don't work now. I read it has something to do with the new X server in Ubuntu 12.04; is that true? How can I get it back to work?

  • After installing the geos libraries (C and C++), trying to install the rgeos package (R) reports geos-config missing!

    - by user1873888
    Knowing that the rgeos package, from the R language, requires a prior installation of the geos libraries, I installed both libgeos and libgeos-c1 (3.2.2), using the synaptic installer on my Ubuntu 12.04 (32-bit) machine. I then tried to install rgeos directly from the R console, and it issued a message saying that geos-config was not found. The output is as follows:

        > install.packages("rgeos")
        Installing package(s) into ‘/home/checo/R/i486-pc-linux-gnu-library/2.15’
        (as ‘lib’ is unspecified)
        also installing the dependency ‘sp’

        probando la URL 'http://cran.rstudio.com/src/contrib/sp_1.0-9.tar.gz'
        Content type 'application/x-gzip' length 882102 bytes (861 Kb)
        URL abierta
        ==================================================
        downloaded 861 Kb

        probando la URL 'http://cran.rstudio.com/src/contrib/rgeos_0.2-19.tar.gz'
        Content type 'application/x-gzip' length 221471 bytes (216 Kb)
        URL abierta
        ==================================================
        downloaded 216 Kb

        * installing *source* package ‘sp’ ...
        ** package ‘sp’ successfully unpacked and MD5 sums checked
        ** libs
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c Rcentroid.c -o Rcentroid.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c gcdist.c -o gcdist.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c init.c -o init.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip.c -o pip.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip2.c -o pip2.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c sp_xports.c -o sp_xports.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c surfaceArea.c -o surfaceArea.o
        gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c zerodist.c -o zerodist.o
        gcc -std=gnu99 -shared -o sp.so Rcentroid.o gcdist.o init.o pip.o pip2.o sp_xports.o surfaceArea.o zerodist.o -L/usr/lib/R/lib -lR
        installing to /home/checo/R/i486-pc-linux-gnu-library/2.15/sp/libs
        ** R
        ** data
        ** demo
        ** inst
        ** preparing package for lazy loading
        ** help
        *** installing help indices
        ** building package indices
        ** installing vignettes
           ‘intro_sp.Rnw’ ‘over.Rnw’
        ** testing if installed package can be loaded
        * DONE (sp)

        * installing *source* package ‘rgeos’ ...
        ** package ‘rgeos’ successfully unpacked and MD5 sums checked
        configure: CC: gcc -std=gnu99
        configure: CXX: g++
        configure: rgeos: 0.2-17
        checking for /usr/bin/svnversion... no
        configure: svn revision: 394
        checking geos-config usability... ./configure: line 1385: geos-config: command not found
        no
        configure: error: geos-config not usable
        ERROR: configuration failed for package ‘rgeos’
        * removing ‘/home/checo/R/i486-pc-linux-gnu-library/2.15/rgeos’
        Warning in install.packages :
          installation of package ‘rgeos’ had non-zero exit status

    Forgive my ignorance, but I don't know where this file, geos-config, comes from: should it be generated by the gcc compilations above, or should it have been installed along with the libgeos libraries? I learned from another machine that geos-config is an executable and that it should be installed in /usr/bin. Do you have any idea what's wrong with my procedure? Thanks, -Sergio.

  • Best way to remote restart Ubuntu from Windows machine

    - by robsoft
    Background: I'm looking to put a series of Ubuntu machines into retail locations, where they're used as dumb kiosks to show a series of slides on large LCD panel TVs. Once installed, they won't have a keyboard or mouse connected but will have a fixed IP on the local network. Everything is configured to auto-start: no automatic updates, no power saving, etc. I think we're pretty much good to go, apart from one thing: I need the retail staff to be able to restart the boxes if a problem arises. We have VNC running (now that we've turned off desktop enhancements!) so that we can get into the machines remotely if we need to, but that's not something we would allow the retail staff to do. The machines are going to be physically out of the way (probably in the ceiling space), so the power button is not easily accessible!

    I'd like some means of allowing the retail staff to restart the Ubuntu machine from the desktop of one of their Windows terminals. I don't really want to give them some kind of raw terminal access (the command line will frighten them!), and I don't want them to use VNC (as stated above). Ideally there would be an icon on the Windows desktop: they double-click it, reply to a simple "are you sure?" prompt, and the Ubuntu box is told to restart. The Windows side of that won't be a problem; we can write something using Delphi, Python & Qt4, whatever. It's the Ubuntu side I'm stuck with. Out of sight/view, could a Windows program open a terminal across the network and tell Ubuntu to restart? Is this what SSH could be used for? (I have never set that kind of thing up.) The Windows programming side isn't really an issue; it's just that I'm a total Ubuntu noob and don't know where to start from the platform point of view.

    The other thing we considered is having the machine automatically restart itself at a set time each day (obviously out of store hours!). To me that seems a bit unnecessary, though forcing a restart once a week/month might be worthwhile. Any thoughts or suggestions? Being able to restart the box on demand across the network is my prime requirement.
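
    This is exactly the sort of thing SSH is for: the Windows app opens an SSH session to the kiosk's fixed IP and runs a reboot command, with no terminal ever shown to the staff. Here is a minimal, hedged sketch of the Windows side in C#, assuming the open-source SSH.NET library (Renci.SshNet, available via NuGet) and a dedicated account on the Ubuntu box that is allowed to reboot without a password (e.g. a sudoers entry like "kiosk ALL=(ALL) NOPASSWD: /sbin/reboot"); the host, account, and password are placeholders:

        using System;
        using Renci.SshNet; // SSH.NET client library (assumption: added via NuGet)

        class KioskRestart
        {
            static void Main()
            {
                Console.Write("Restart the kiosk? (y/n) ");
                string answer = Console.ReadLine();
                if (!string.Equals(answer, "y", StringComparison.OrdinalIgnoreCase))
                    return;

                using (var ssh = new SshClient("192.168.0.50", "kiosk", "secret"))
                {
                    ssh.Connect();
                    // The session may drop as the box goes down; that's expected.
                    ssh.RunCommand("sudo /sbin/reboot");
                    Console.WriteLine("Reboot requested.");
                }
            }
        }

    Key-based authentication (shipping a private key with the app instead of a password) and limiting the kiosk account's sudo rights to that single reboot command would keep the exposure small; the same account plus a cron entry on the Ubuntu side would cover the optional scheduled weekly restart.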

  • Jitter during wall collisions with Bullet Physics: contact/penetration tolerance?

    - by Niriel
    I use the Bullet physics engine through Panda3D. My scene is still very simple, think Wolfenstein 3D: tile-based, walls are solid cubes. I expect walls to block the player, and I expect the player to slide along the walls in case of non-normal incidence. What I get is what I expect, with one difference: there is some jitter. If I try to force myself into the wall, I see the frames blinking quickly between two positions. These differ by about 0.04 units of distance, which corresponds to 4 cm in my game. I noticed a 4 cm figure elsewhere too: the bottom of my player capsule is 4 cm below ground when at rest. Does that mean there is somewhere in the Bullet engine a default 0.04-unit tolerance that differentiates contact from collision? If so, what should I do? Should I change the scale of my game so that these 0.04 units correspond to 0.4 cm, making the jitter ten times smaller? Or can I ask Bullet to change its tolerance to a smaller value?

    Edit: this is the jitter I get (6.155 - 6.118 = 0.036):

        LPoint3f(0, 6.11694, 0.835)
        LPoint3f(0, 6.15499, 0.835)
        LPoint3f(0, 6.11802, 0.835)
        LPoint3f(0, 6.15545, 0.835)
        LPoint3f(0, 6.11817, 0.835)
        LPoint3f(0, 6.15726, 0.835)
        LPoint3f(0, 6.11876, 0.835)
        LPoint3f(0, 6.15911, 0.835)
        LPoint3f(0, 6.11937, 0.835)

    I found a setMargin method and set it to 5 mm on both the BoxShape for the walls and the capsule shape for the player. It still jitters by about 35 mm, as illustrated by this log (11.117 - 11.082 = 0.035):

        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1169, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1175, 0.905)
        LPoint3f(0, 11.0822, 0.905)
        LPoint3f(0, 11.1178, 0.905)
        LPoint3f(0, 11.0823, 0.905)
        LPoint3f(0, 11.1183, 0.905)

    The margin on the capsule did change my penetration with the floor, though: I now sit a bit higher (0.905 instead of 0.835). However, it changed nothing when colliding with the walls. How can I make the collisions against the walls less jittery?

    Edit, the day after: after more investigation, it appears that dynamic objects behave well. My problem comes from the btKinematicCharacterController that I use for moving my character; that thing is totally bugged, according to the whole Internet :/.

  • Contract / Project / Line-Item hierarchy design considerations

    - by Ryan
    We currently have an application that allows users to create a Contract. A contract can have 1 or more Projects; a project can have 0 or more sub-projects (which can have their own sub-projects, and so on) as well as 1 or more Lines; and lines can have any number of sub-lines (which can have their own sub-lines, and so on). Currently, our design contains circular references, and I'd like to get away from that. It looks a bit like this:

        public class Contract
        {
            public List<Project> Projects { get; set; }
        }

        public class Project
        {
            public Contract OwningContract { get; set; }
            public Project ParentProject { get; set; }
            public List<Project> SubProjects { get; set; }
            public List<Line> Lines { get; set; }
        }

        public class Line
        {
            public Project OwningProject { get; set; }
            public Line ParentLine { get; set; }  // was "List ParentLine" -- presumably a typo
            public List<Line> SubLines { get; set; }
        }

    We're using the M-V-VM pattern and use these models (and their associated view models) to populate a large "edit" screen where users can modify their contracts and the properties on all of the objects. Where things start to get confusing for me is when we add, for example, a Cost property to the Line: the issue is reflecting at the highest level (the contract) changes made at the lowest level.

    I'm looking for some thoughts on how to change this design to remove the circular references. One thought I had was that the contract would have a Dictionary<Guid, Project> containing ALL projects (regardless of their level in the hierarchy). The Project would then have a Guid property called Parent, which could be used to search the contract's dictionary for the parent object. The same logic could be applied at the Line level. Thanks! Any help is appreciated.
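
    A hedged sketch of that dictionary idea (names are illustrative, not from the original code): children refer to their parents by Guid instead of by object reference, so the object graph becomes a pair of flat lookups with no cycles, and a roll-up such as a contract-level total cost is a simple walk over the dictionaries:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Contract
        {
            // Every project in the contract, regardless of hierarchy level.
            public Dictionary<Guid, Project> AllProjects { get; } = new Dictionary<Guid, Project>();

            // Example roll-up: a Cost change on any Line is immediately
            // visible at the contract level, without walking parent pointers.
            public decimal TotalCost()
            {
                return AllProjects.Values
                    .SelectMany(p => p.AllLines.Values)
                    .Sum(l => l.Cost);
            }
        }

        public class Project
        {
            public Guid Id { get; } = Guid.NewGuid();
            public Guid? ParentProjectId { get; set; }  // null for a top-level project
            public Dictionary<Guid, Line> AllLines { get; } = new Dictionary<Guid, Line>();
        }

        public class Line
        {
            public Guid Id { get; } = Guid.NewGuid();
            public Guid? ParentLineId { get; set; }     // null for a top-level line
            public decimal Cost { get; set; }
        }

    Looking up a parent is then just AllProjects[p.ParentProjectId.Value], and serialization no longer trips over cycles.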

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only, learning capacity, of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as it had obviously worked on one project and so was easily moved to a new one (probably by the same developer who originally wrote it). I mentioned this in one of our occasional programmer meetings and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed, and I started looking into the possibility.

    However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment; we have projects mainly in 2.0 but are starting to branch to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially, when creating this DLL, I included some shared classes that could be used across any project type (web, client, etc.), but then I started adding shared modules that would be useful in our MVC applications only. This meant I had to reference some Microsoft web DLLs in order to leverage some of the classes I was creating (at this stage, MVC 2.0). Now my issue is that we have a shared DLL with references to web-specific libraries that could also be used in a client application. Not only that: our DLL initially referenced MVC 2.0, and we will eventually move all projects to MVC 3.0, yet I expect a lot of the classes in this library to still be relevant to MVC 3. The code within this DLL is separated into its own namespaces, such as:

        CompanyDLL.Primitives
        CompanyDLL.Web.Mvc
        CompanyDLL.Helpers

    So, my questions are:

    1. Is it OK to have a shared library like this, or if we have web-specific features in it, should we create a separate web-only DLL targeted at a specific framework or MVC version?
    2. If it's OK, what kind of issues might we face when using a library that references MVC 2 in an MVC 3 project, for example? I would expect we might run into some sort of compatibility issue, or situations where developers using the library don't realize they need the MVC 2.0 libraries when they only want to use some of the generic classes.

    The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. Still, the number of times I've seen classes and methods copied across projects because they are proven, tested code is a bit unnerving, to be perfectly honest!

  • So, I thought I wanted to learn frontend/web development and break out of my comfort zone...

    - by ripper234
    I've been a backend developer for a long time, and I really swim in that field. C++/C#/Java, databases, NoSQL, caching: I feel very much at ease around these platforms and concepts. In the past few years, I started to get a taste of end-to-end web programming, and recently I accepted a job offer on a front-end team developing a large, complex product. I wanted to break out of my comfort zone and become more of an "all-around developer". The problem is, I'm getting more and more convinced I don't like it. Things I like about backend programming that are missing in frontend work:

    - More interesting problems - when I compare designing a server that handles massive data to adding another form to a page or changing validation logic, I find the former a lot more interesting.
    - Refactoring, refactoring, refactoring - I am addicted to Visual Studio with ReSharper, or IntelliJ. I feel very comfortable writing code as it goes, without investing too much thought, because I know that with a few clicks I can refactor it into beautiful code. To my knowledge, this doesn't exist at all in JavaScript.
    - IntelliSense and navigation - I hate looking at a bunch of JS code without instantly being able to tell what it does. In VS/IntelliJ I can summon the documentation, navigate to the code, climb up inheritance hierarchies... life is sweet.
    - Auto-completion - just hit Ctrl-Space on an object to see what you can do with it.
    - Easier to test - with almost any backend feature, I can use TDD to capture the requirements, see a bunch of failing tests, then implement, knowing that if the tests pass I did my job well. With frontend, while tests can help a bit, I find that most of the testing is still manual: fire up that browser and verify the site didn't break. I miss that feeling of "a green CI means everything is well with the world."

    Now, I've only seriously practiced frontend development for about two months, so this might seem premature... but I'm getting a nagging feeling that I should abandon this quest and return to my comfort zone, because, well, it's so comfy and fun. Another point worth mentioning in this context: while I am learning some frontend tools, a lot of what I'm learning is our company's specific infrastructure, which I'm not sure will be very useful later in my career.

    Any suggestions or tips? Do you think I should give frontend programming "a proper chance" of at least six to twelve months before calling it quits? Could all my pains be growing pains that will magically disappear as I get more experienced? Or is gaining this perspective valuable enough, even if I plan to do more "backend stuff" later on, that it's worth gritting my teeth and continuing with my learning?

  • Methodology for understanding jQuery plugins & APIs developed by third parties

    - by Taoist
    I have a question about third-party jQuery plugins and APIs, and the methodology for understanding them. Recently I downloaded the jQuery Masonry/Infinite Scroll plugin and couldn't figure out how to configure it based on the instructions, so I downloaded a fully developed demo and manually deleted everything that wouldn't break the functionality. The code that was left allowed me to understand the plugin in much greater detail than the documentation did.

    I'm now having a similar issue with a plugin called jQuery Knob (http://anthonyterrien.com/knob/). The jQuery Knob readme says this is working code:

        $(function() {
            $('.dial')
                .trigger( 'configure', {
                    "min":10,
                    "max":40,
                    "fgColor":"#FF0000",
                    "skin":"tron",
                    "cursor":true
                } );
        });

    But as far as I can tell, it isn't at all. The readme also says the plugin uses canvas. I am wondering if I am supposed to wrap this code in a canvas context, or if that functionality is already part of the plugin. I know this kind of "question" might not fit in here, but I'm a bit confused about the assumptions behind reading this kind of documentation and thought I would post the query regardless. I'm curious whether this is due to my "newbie" programming experience or if this is something seasoned coders also fight with. Thank you.

    Edit: in response to Tyanna's reply, I modified the code and it still doesn't work. I posted it below. I made sure to check the Google console to ensure the basics were taken care of, such as not getting a read error on the library.

        <!DOCTYPE html>
        <meta charset="UTF-8">
        <title>knob</title>
        <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/hot-sneaks/jquery-ui.css" type="text/css" />
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js" charset="utf-8"></script>
        <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.21/jquery-ui.min.js"></script>
        <script src="js/jquery.knob.js"></script>

        <div id="button1">test</div>

        <script>
        $(function() {
            $("#button1").click(function () {
                $('.dial').trigger( 'configure', {
                    "min":10,
                    "max":40,
                    "fgColor":"#FF0000",
                    "skin":"tron",
                    "cursor":true
                } );
            });
        });
        </script>

  • Small script to look for Project Replication actions that have failed

    - by Trond Strømme
    Today, when looking at a couple of projects on a ZFS 7320 Storage Appliance, I noticed that one project's replication action had failed; as I hadn't checked the Recent Alerts log yet, I was not aware of this. I decided to write a small script to check whether others had failed too. Nothing fancy: just a loop through all projects that looks at each project's replication child, compares the values of the last_sync and last_try properties, and prints the result if they're not equal. (There are probably more sensible ways of doing this, but at least it gives me the chance to put on my headphones and do just a little bit of coding.)

        script
        // this script will locate failed project-level replication.
        // it will look at the sync times for 'last_sync' and 'last_try'
        // and compare these; if they deviate you should investigate.
        // NOTE! this code is offered 'as is'. Run at your own risk;
        // it will probably work as intended, but in no way can I
        // (or Oracle) be held responsible if your server starts behaving
        // like a three-year-old kid in a candy store... (not that mine do,
        // they are very well behaved boys...)
        run('configuration');
        run('storage');
        printf('Host: %s, pool: %s\n', get('owner'), get('pool'));
        run('cd /');
        run('shares');
        proj = list();
        printf("total projects: %d\n", proj.length);
        // just for project-level replication
        for (i = 0; i < proj.length; i++) {
            run('select ' + proj[i]);
            run('replication');
            // get all replication actions
            preps = list();
            for (j = 0; j < preps.length; j++) {
                run('select ' + preps[j]);
                last_sync = get('last_sync');
                last_try = get('last_try');
                // printf("target %s\n", get('target')); // why does this not get the proper name?
                if (!(last_sync.valueOf() === last_try.valueOf())) {
                    printf("sync has failed for %s %s\n", proj[i], get('target'));
                } else {
                    // printf("OK %s %s\n", proj[i], get('target'));
                }
                run('done'); // done with the replication action
            }
            run('done');
            run('done');
        }
        printf("finished\n");

    For more on how to run the script, or on testing it, please see my previous post. Sample output:

        Host: elb1sn01, pool: exalogic
        total projects: 45
        sync has failed for ACSExalogicSystem cb3a24fe-ad60-c90f-d15d-adaafd595639
        finished

  • New ZFS Encryption features in Solaris 11.1

    - by darrenm
    Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption. There is a new read-only property, keychangedate, that shows the date and time of the last wrapping-key change (basically the last time 'zfs key -c' was run on the dataset); this is similar to the rekeydate property, which shows the last time we added a new data encryption key:

        $ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
        NAME                   PROPERTY       VALUE                  SOURCE
        rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
        rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
        rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local

    The above example shows that we have both changed the wrapping key and added new data encryption keys since the filesystem was initially created. If we haven't changed a wrapping key, it will be the same as the creation date. It should be obvious, but for filesystems created prior to Solaris 11.1 we don't have this data, so it is displayed as '-' instead.

    Another change I made was to relax the restriction that the size of the wrapping key must match the size of the data encryption key (i.e., the size given in the encryption property). In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm, we required the wrapping key to be 256 bits in length. This restriction was unnecessary and made it impossible to select encryption property values with key lengths of 128 and 192 when the wrapping key was stored in the Oracle Key Manager, because the Oracle Key Manager currently stores AES 256-bit keys only. With Solaris 11.1 this restriction has been removed.

    There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to be 'passphrase': since this key is generated internally to libzfs, and to preserve compatibility on upgrade from older releases, the code will always generate a wrapping key (using PKCS#5 PBKDF2, as before) that matches the key length of the encryption property.

    The pam_zfs_key module has been updated so that it allows you to specify encryption=off. There were also some bug fixes, including not attempting to load keys for datasets that are delegated to zones, and some fixes to error paths to ensure that we can support Zones on Shared Storage where all the datasets in the ZFS pool are encrypted, which I discussed in my previous blog entry. If there are features you would like to see for ZFS encryption, please let me know (direct email or comments on this blog are fine; or, if you have a support contract, have your support rep log an enhancement request).

  • Alert: It is No Longer 1982, So Why is CRM Still There?

    - by Mike Stiles
    Hot on the heels of Oracle's recent LinkedIn integration announcement and Oracle Marketing Cloud Interact 2014, the Oracle Social Cloud is preparing for another big event: the CRM Evolution conference and exhibition in NYC. The role of social channels in customer engagement continues to grow, and social customer engagement will be a significant theme at the conference. According to Paul Greenberg, CRM Evolution conference chair, author, and managing principal at The 56 Group, social channels have become so pervasive that there is no longer a clear reason to distinguish between "social CRM" and traditional CRM systems. Why not? Because social is a communication hub every bit as vital and as heavily used as the phone or email.

    What makes social different is that if you think of it as a phone, it's a party line: customer interactions are far from secret, and social connections are listening in by the hundreds, hearing whether their friend is having a positive or negative experience with your brand. According to a Mention.com study, 76% of brand mentions are neutral, neither positive nor negative; these mentions fail to get much notice. So think about what that means for the remaining 24% of mentions. They stand out, because a verdict about you is being rendered in them, usually with emotion. Suddenly, where the R of CRM has been lip service and somewhat expendable in the past, "relationship" takes on new meaning, seriousness, and urgency.

    Remarkably, legions of brands still approach CRM as if it were 1982. Today, brands must provide customer experiences the customer actually likes (how dare they expect such things). They must intimately know not only their customers but each customer, because technology now makes personalized experiences possible. That's why the Oracle Social Cloud has been so mission-oriented about seamlessly integrating social with sales, marketing, and customer service interactions, so the enterprise can have an actionable 360-degree view of the customer. It's the key to the customer-centricity we hear so much about these days.

    If you're attending CRM Evolution, Chris Moody, Director of Product Marketing for the Oracle Marketing Cloud, will show you how unified customer experiences and enhanced customer-centricity will help you attract and keep ideal customers and brand advocates ("The Pursuit of Customer-Centricity", Aug 19 at 2:45p ET). And Meg Bear, Group Vice President for the Oracle Social Cloud, will sit on a panel talking about "terms of engagement" and the ways tech can now enhance your interactions with customers (Aug 20 at 10a ET). If you can't be there, we'll be live-tweeting from the @oraclesocial handle, so make sure you're a faithful follower. You'll notice NOBODY is writing about the wisdom of "company-centricity." Now is the time to bring your customer relationship management into the socially connected age. @mikestiles

    Photo: Sue Pizarro, freeimages.com

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor to describe the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure software devs have the same authority. Ideas?

    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care) about specific aspects of the system we're retiring (i.e., they think it was just fine):

    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data, and these tables had new records added every day, even though they were duplicates of previous records. There was also no way to tie the reference data used for a particular case to the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES have spaces, apostrophes, and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would create a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.

    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and it was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly, and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question; without some of the details, it can be difficult to really appreciate the awful beauty of this problem.

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly, it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process, as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details.

    1. Determine Access
    Why - Even if content does not need to be restricted for security reasons, it is helpful to apply access rights so that the content ends up being visible only to the users it relates to. This will greatly improve the user experience. For instance, if your team is working on a group project, most of your fellow employees do not need to see the content being worked on for that project.
    How - Make use of native content features that allow propagation of security and metadata from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security and metadata policies for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project. (A sketch of this inheritance idea follows below.)
    Impact - Users can find information with less effort, as they will only be exposed to what they need for their work and can leverage advanced search features that take advantage of the metadata assigned to content. The combination of default security and metadata will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next two posts.

    2. Assign Workflow (optional, depending on the nature of the content)
    Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements.
    How - Oracle's Universal Content Management offers two low-effort ways to route content through workflow: it can be applied to content automatically, based on Criteria acting on metadata, or explicitly assigned to content with a Basic workflow.
    Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached.

    By using inheritance from parent folders within the content management system, content can automatically be given the right security, metadata and workflow information for a particular project. This relieves management teams and content contributors of the burden of doing this for every piece of content. We will cover the Manage phase of the content lifecycle in our next installment.
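    To illustrate the inheritance idea mentioned above, here is a minimal sketch in Python. This is NOT Oracle UCM's actual API - just a toy model, with hypothetical names, of how security and metadata defaults can propagate from a parent folder to the content beneath it:

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class Folder:
            name: str
            parent: Optional["Folder"] = None
            security_group: Optional[str] = None  # e.g. "ProjectX-Team" (hypothetical)
            metadata: dict = field(default_factory=dict)

            def effective_security(self) -> str:
                # Walk up the tree until an explicit security group is found.
                if self.security_group:
                    return self.security_group
                return self.parent.effective_security() if self.parent else "Public"

            def effective_metadata(self) -> dict:
                # Parent defaults come first; child values override them.
                base = self.parent.effective_metadata() if self.parent else {}
                return {**base, **self.metadata}

        # Defaults are set once at the parent; new content beneath it
        # picks them up automatically.
        root = Folder("Content", security_group="Public",
                      metadata={"department": "Marketing"})
        project = Folder("ProjectX", parent=root, security_group="ProjectX-Team",
                         metadata={"workflow": "EditorialReview"})
        print(project.effective_security())  # -> ProjectX-Team
        print(project.effective_metadata())  # -> {'department': 'Marketing', 'workflow': 'EditorialReview'}

    The point of the design is that contributors never fill this in by hand: setting the defaults once at the parent level is what makes the policy painless to enforce.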

    Read the article

  • Many sources of movement in an entity system

    - by Sticky
    I'm fairly new to the idea of entity systems, having read a bunch of stuff (most usefully, this great blog and this answer). I'm having a little trouble, though, understanding how to handle something as simple as manipulating the position of an object from an undefined number of sources. That is, I have my entity, which has a position component. I then have some event in the game which tells this entity to move a given distance, in a given time. These events can happen at any time, and will have different values for distance and time. The result is that they'd be compounded together. In a traditional OO solution, I'd have some sort of MoveBy class that contains the distance/time, and an array of those inside my game object class. Each frame, I'd iterate through all the MoveBys and apply each one to the position. If a MoveBy has reached its finish time, I'd remove it from the array. With the entity system, I'm a little confused about how I should replicate this sort of behavior. If there were just one of these at a time, instead of compounding them together, it'd be fairly straightforward (I believe) and look something like this:

    - PositionComponent containing x, y
    - MoveByComponent containing x, y, time
    - Entity which has both a PositionComponent and a MoveByComponent
    - MoveBySystem that looks for an entity with both these components, and adds the value of the MoveByComponent to the PositionComponent. When the time is reached, it removes the component from that entity.

    I'm a bit confused as to how I'd do the same thing with many move by's. My initial thoughts are that I would have:

    - PositionComponent, MoveByComponent the same as above
    - MoveByCollectionComponent which contains an array of MoveByComponents
    - MoveByCollectionSystem that looks for an entity with a PositionComponent and a MoveByCollectionComponent, iterating through the MoveByComponents inside it, applying/removing as necessary

    I guess this is a more general problem of having many of the same component and wanting a corresponding system to act on each one. My entities contain their components inside a hash of component type - component, so they strictly have only one component of a particular type per entity. Is this the right way to be looking at this? Should an entity only ever have one component of a given type at all times?
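    One way to make the second design concrete - a minimal sketch in Python rather than a definitive answer, with all names hypothetical:

        from dataclasses import dataclass, field

        @dataclass
        class Position:
            x: float = 0.0
            y: float = 0.0

        @dataclass
        class MoveBy:
            dx: float          # total distance to cover on each axis
            dy: float
            duration: float    # seconds
            elapsed: float = 0.0

        @dataclass
        class MoveByCollection:
            moves: list = field(default_factory=list)

        def move_by_collection_system(position: Position,
                                      collection: MoveByCollection,
                                      dt: float) -> None:
            # Advance every active MoveBy, compounding their contributions,
            # then drop the ones that have finished.
            for m in collection.moves:
                step = min(dt, m.duration - m.elapsed)  # don't overshoot the end
                position.x += m.dx * (step / m.duration)
                position.y += m.dy * (step / m.duration)
                m.elapsed += step
            collection.moves = [m for m in collection.moves
                                if m.elapsed < m.duration]

        # Two overlapping movement events compound on the same entity:
        pos = Position()
        moves = MoveByCollection([MoveBy(10, 0, 2.0), MoveBy(0, 5, 1.0)])
        for _ in range(120):  # simulate 120 frames at 60 fps
            move_by_collection_system(pos, moves, 1 / 60)
        print(pos)            # -> roughly Position(x=10.0, y=5.0)

    This keeps the hash-of-component-type invariant intact: the collection component stays unique per entity, while the MoveBy entries inside it can multiply freely.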

    Read the article

  • My First 5K, the recap.

    - by Chris Williams
    It was a nice day to be outside (and trust me, those aren't words you'll hear from me often.) I got to the site around 7:45, hit the pre-reg table and got my number along with a goody bag full of coupons for racing gear, a water bottle and a t-shirt. Oh, and a map. Stashed all that stuff in the jeep, emptied my pockets of everything but my iPhone and my jeep key, and proceeded to walk around for a bit as people started showing up and signing in. It was fairly breezy, and there was definitely a storm coming... but it was anyone's guess when it would actually arrive.

    It was interesting to see everyone who was participating. If I had to guess, I would say the event was 60-70% women, with a pretty broad distribution of age... as young as 13 to well over 60 (in both genders). I don't know exactly how many folks were there, but it was well over 300. Eventually it was time to kick things off, and everyone made their way to the start line. All of the 5K and 10K runners were mixed together, starting at the same time. All the walkers and the people with strollers or dogs were in the back. It was pretty chaotic at first, once things started, but it thinned out fairly quickly. The 10K people and the hardcore runners sped ahead of everyone else and the walkers gradually lagged behind.

    The 5K course was pretty nice, winding around a lake down in Eden Prairie. The 10K course overlapped most of ours, but branched off a couple times too. I didn't run the whole time, but I started the race running and I ended it running, and did a mix of walking and running along the way. I met my goals, which were a) don't ever stop and b) don't be last. The weather managed to hold out for the entire race. It never got too hot, there was a nice breeze and it was mostly overcast. Pretty much perfect in my book. About 20-30 minutes after I left, the rain came down pretty hard. I had a good time, and will most likely do more of them. We'll see.

    Read the article
