Search Results

Search found 2282 results on 92 pages for 'walking man'.


  • Pthread - setting scheduler parameters

    - by Andna
    I wanted to use reader-writer locks from the pthread library in a way that writers have priority over readers. I read in my man pages that

        If the Thread Execution Scheduling option is supported, and the threads involved in the lock are executing with the scheduling policies SCHED_FIFO or SCHED_RR, the calling thread shall not acquire the lock if a writer holds the lock or if writers of higher or equal priority are blocked on the lock; otherwise, the calling thread shall acquire the lock.

    so I wrote a small function that sets up thread scheduling options.

        void thread_set_up(int _thread)
        {
            struct sched_param *_param = malloc(sizeof(struct sched_param));
            int *c = malloc(sizeof(int));
            *c = sched_get_priority_min(SCHED_FIFO) + 1;
            _param->__sched_priority = *c;
            long *a = malloc(sizeof(long));
            *a = syscall(SYS_gettid);
            int *b = malloc(sizeof(int));
            *b = SCHED_FIFO;
            if (pthread_setschedparam(*a, *b, _param) == -1) {
                // depending on which thread calls this function, a few things can happen
                if (_thread == MAIN_THREAD)
                    client_cleanup();
                else if (_thread == ACCEPT_THREAD) {
                    pthread_kill(params.main_thread_id, SIGINT);
                    pthread_exit(NULL);
                }
            }
        }

    Sorry for those a, b, c, but I tried to malloc everything; I still get SIGSEGV on the call to pthread_setschedparam, and I am wondering why.
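
    The crash is most likely the first argument: pthread_setschedparam() expects a pthread_t, not a kernel thread ID from syscall(SYS_gettid). On Linux/NPTL a pthread_t is effectively a pointer to the thread descriptor, so handing the function a raw TID makes the library dereference a bogus address, hence the SIGSEGV. Note also that the pthread_* calls report failure by returning a non-zero error number, not -1. A minimal sketch of the corrected call, assuming the thread is changing its own scheduling:

        #include <pthread.h>
        #include <sched.h>

        static int make_fifo(void)
        {
            struct sched_param param;
            param.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;
            /* pthread_self(), not a TID; returns 0 or an error number */
            int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
            return err; /* e.g. EPERM when run without the needed privileges */
        }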

  • Lost in Nodester Installation

    - by jslamka
    I am trying to install my own version of Nodester. I have tried on Ubuntu 12.04 LTS and now with CentOS. I am not the most skilled Linux user (~2 months of use), so I am at a loss at this point. The instructions are located at https://github.com/nodester/nodester/wiki/Install-nodester#wiki-a. They ask you to "export paths (to make npm work)" with the lines necessary to accomplish this:

        cd ~
        echo -e "root = ~/.node_libraries\nmanroot = ~/local/share/man\nbinroot = ~/bin" > ~/.npmrc
        echo -e "export PATH=&quot;\${PATH}:~/bin&quot;;" >> ~/.bashrc
        source ~/.bashrc

    I can accomplish all of this until I get to the source ~/.bashrc line. When I run that, I get the following:

        [root@MYSERVER ~]# source ~/.bashrc
        -bash: /root/.bashrc: line 13: syntax error near unexpected token ';;'
        -bash: /root/.bashrc: line 13: 'export PATH=&quot;${PATH}:~/bin&quot;;'

    I have tried changing the quot; to " and that didn't help. I tried changing quot; to colons and that didn't help. I also removed it, and that didn't help (I am sure many of you at this point are probably wondering why I would even try those things). Does anyone have any insight as to what I need to do to get this to run properly?
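
    For what it's worth, those quot; fragments look like HTML-escaped double quotes (&quot;) that leaked from the wiki page into the copied command, so the line that lands in .bashrc is not valid shell, and the stray & and trailing ;; are what trip the parser. A sketch of what line 13 of .bashrc was presumably meant to contain:

        # real double quotes instead of the &quot; entities; $HOME instead of ~,
        # since tilde does not expand inside double quotes
        export PATH="${PATH}:$HOME/bin"

    After correcting that line, source ~/.bashrc should run without complaint.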

  • Compile Apache 2.4.3 on Centos 6.2 (64bit)

    - by RiseCakoPlusplus
    I am attempting to compile Apache 2.4.3 with apr-1.4.6 and apr-util-1.5.1 on CentOS 6.2 (64-bit):

        ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-unknown-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --without-pear --with-bz2 --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --with-kerberos --enable-ucd-snmp-hack --enable-shmop --enable-calendar --with-libxml-dir=/usr --enable-xml --with-system-tzdata --with-mhash --with-apxs2=/usr/sbin/apxs --libdir=/usr/lib64/php --enable-pdo=shared --with-mysql=shared,/usr --with-mysqli=shared,/usr/lib64/mysql/mysql_config --with-pdo-mysql=shared,/usr/lib64/mysql/mysql_config --without-pdo-sqlite --without-gd --disable-dom --disable-dba --without-unixODBC --disable-xmlreader --disable-xmlwriter --without-sqlite3 --disable-phar --disable-fileinfo --disable-json --without-pspell --disable-wddx --without-curl --disable-posix --disable-sysvmsg --disable-sysvshm --disable-sysvsem

        ./configure --with-included-apr --with-included-apr-util

    and when I issue make, this happens:

        /root/httpd-2.4.3/srclib/apr/libtool: line 5989: cd: yes/lib: No such file or directory
        libtool: link: cannot determine absolute directory name of `yes/lib'
        make[3]: *** [libaprutil-1.la] Error 1
        make[3]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util'
        make[2]: *** [all-recursive] Error 1
        make[2]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/root/httpd-2.4.3/srclib'
        make: *** [all-recursive] Error 1

    Is there anything I missed?
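
    The 'cd: yes/lib' failure usually means a configure variable that should hold an install prefix ended up with the literal value yes, which apr-util's libtool then treats as a directory. Note that the first block above is a PHP-style configure line (apxs2, php.d, PDO options) that passes --cache-file=../config.cache; a stale cache or leftover state from that run polluting the httpd/apr-util configure is a common cause. A sketch of a clean retry, using the paths from the error output above:

        cd /root/httpd-2.4.3
        make distclean          # drop artifacts of earlier configure runs
        rm -f ../config.cache   # remove the cache shared with the other configure
        ./configure --with-included-apr --with-included-apr-util
        make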

  • Operator Overloading << in C++

    - by thlgood
    I'm a freshman in C++. I wrote this simple program to practice overloading. This is my code:

        #include <iostream>
        #include <string>
        using namespace std;

        class sex_t {
        private:
            char __sex__;
        public:
            sex_t(char sex_v = 'M') : __sex__(sex_v) {
                if (sex_v != 'M' && sex_v != 'F') {
                    cerr << "Sex type error!" << sex_v << endl;
                    __sex__ = 'M';
                }
            }

            const ostream& operator << (const ostream& stream) {
                if (__sex__ == 'M')
                    cout << "Male";
                else
                    cout << "Female";
                return stream;
            }
        };

        int main(int argc, char *argv[]) {
            sex_t me('M');
            cout << me << endl;
            return 0;
        }

    When I compile it, it prints a lot of error messages, and the output is in such a mess that it's hard for me to find the useful part:

        sex.cpp: In function 'int main(int, char**)':
        sex.cpp:32:10: error: no match for 'operator<<' in 'std::cout << me'
        sex.cpp:32:10: note: candidates are:
        /usr/include/c++/4.6/ostream:110:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ostre
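
    The mismatch is that a member operator<< makes the class object the left-hand operand (it would have to be invoked as me << cout), so std::cout << me finds no candidate. The usual fix, sketched here, is a non-member inserter, declared a friend so it can read the private field; note, incidentally, that identifiers containing double underscores such as __sex__ are reserved for the C++ implementation:

        // inside class sex_t:
        friend ostream& operator<<(ostream& stream, const sex_t& s) {
            return stream << (s.__sex__ == 'M' ? "Male" : "Female");
        }

    With that in place, cout << me << endl; resolves normally.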

  • C++: how to serialize/deserialize objects without any library?

    - by Winte Winte
    I'm trying to understand how serializing/deserializing works in C++, so I want to do it without any libraries. But I'm really stuck. I started with simple objects, but when I tried to deserialize a vector I realized that I can't read a vector back unless I write its size before it. Moreover, I don't know which file format I should choose, because if digits appear right before the size of the vector I have no chance of reading it correctly. And that is only the vector; I also want to do this with classes and the map container. My task is to serialize/deserialize objects like this:

        PersonInfo {
            unsigned int age_;
            string name_;
            enum { undef, man, woman } sex_;
        }

        Person : PersonInfo {
            vector<Person> children_;
            map<string, PersonInfo> addrBook_;
        }

    Currently I know how to serialize simple objects in a way like this:

        vector<PersonInfo> vecPersonInfo;
        vecPersonInfo.push_back(*personInfo);
        vecPersonInfo.push_back(*oneMorePersonInfo);

        ofstream file("file", ios::out | ios::binary);
        if (!file) {
            cout << "can not open file";
        } else {
            vector<PersonInfo>::const_iterator iterator = vecPersonInfo.begin();
            for (; iterator != vecPersonInfo.end(); iterator++) {
                file << *iterator;
            }
        }

    Could you please suggest how I can do this for this complex object, or a good tutorial that explains it clearly?
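
    The usual answer is exactly the idea hinted at above: length-prefix every variable-sized field, i.e. write the element count first and then the elements, and read them back in the same order. A minimal sketch of that pattern for strings (binary format; the function names are illustrative):

        #include <cstdint>
        #include <fstream>
        #include <string>

        void writeString(std::ofstream& out, const std::string& s) {
            std::uint32_t n = s.size();                              // count first...
            out.write(reinterpret_cast<const char*>(&n), sizeof n);
            out.write(s.data(), n);                                  // ...then the bytes
        }

        std::string readString(std::ifstream& in) {
            std::uint32_t n = 0;
            in.read(reinterpret_cast<char*>(&n), sizeof n);
            std::string s(n, '\0');
            in.read(&s[0], n);
            return s;
        }

    Vectors and maps follow the same pattern recursively: write a count, then each element (for a map, each key/value pair). Nested structures like Person then never need delimiters, so stray digits in the data cannot be confused with a size.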

  • SharePoint - Summing Calculated Columns By Groups (DVWP)

    - by Mark Rackley
    I had a problem… okay.. okay.. so I have many problems… but let’s focus on one in particular or this blog post would never end… okay? Thank you…. So, I had an electronic timesheet where users entered hours for each day of the week. It also had a “Week Total” column, which was a calculated column of the sum. The calculated column looked like this: Pretty easy.. nothing spectacular. So, what’s the problem? WELL……………….. There is a row in the timesheet for each task a person worked on in a given week. So, if you worked on 4 tasks, you would have 4 rows of data, and 4 week totals for that week: This is all fine and dandy, but I want to know what the total was for the entire week. Yes.. I realize the answer is 24 from my example… I mean, I know how to add! I just want SharePoint to display it for me for the executives (we all know, they have math problems). You may be thinking, hey genius (in a sarcastic tone of course), why don’t you just go to the view and total on the “Week Total” field. What a brilliant idea! Why didn’t I think of that… let’s go to the view and do just that…. Ohhhhhh… you can’t total on a Calculated Column.. it’s not even an option… Yeah… I had the same moment. So, what do you do? Well… what do you think I did? 1) Googled “SharePoint total calculated column” 2) Said it couldn’t be done 3) Took a nap 4) Asked the question on Twitter. The correct answer of course is number 4… followed by number 3… although I may have told my boss number 2 so that I look more brilliant than I am. It’s safe to say I did NOT try to find the solution on my own doing step 1… that would be just WAY too easy… So, anyway, I posted the question on Twitter, and it turns out several people had suggestions ranging from using jQuery to using DVWPs. I tend to be a big fan of the DVWP except for the disgusting process of deploying them to another farm.. ugh… just shoot me…. So, that is the solution I went with. Laura Rogers (@WonderLaura) has a super duper easy-to-follow video on the subject over at EndUserSharePoint.com: SharePoint: Displaying Calculated Column SUMS in a View (Screencast). Laura’s video was very easy to follow and was ALMOST exactly what I needed. She does a great job walking you through every step of summing up a calculated field, which was PART of my problem. The other part was that my list is grouped by date! So, I wanted to see, for a given week, the summed “Week Total” of hours. Laura got me on the right track with her video, and I dug a little deeper into the DVWP to accomplish my task. So, here are the steps you follow: 1. Click on the “chevron” (I didn’t know it was actually called that until I heard Laura say it).. I always call it the “little-button-in-the-top-right-corner-with-the-greater-than-sign”.. but “chevron” is much shorter. So, click on the chevron, click on “Sort and Group”. Then add the field you want to group by; in my example it is the “Monday Date” of the timesheet entry. Make sure to check the check boxes for “Show Group Header” AND “Show Group Footer”. Click “OK”. The view now shows the count of each grouped set of data: Interesting, this looks very similar to Laura’s video… right? So, let’s take a look at the code for the Count: Count : <xsl:value-of select="count($nodeset)" /> Wow, also very similar… except in Laura’s video it looks like: Count : <xsl:value-of select="count($Rows)" /> So.. the only difference is that instead of $Rows we have $nodeset. It turns out the $nodeset will go through each row in the group just like $Rows goes through each row in the entire view. 
So, using the exact same logic as in Laura’s blog except replacing $Rows with $nodeset we get the functionality of being able to sum up the values for a group. So, I want to replace “Count: #” with the total hours, this is done using the following changes to the above code: Week Total : <xsl:value-of select="sum($nodeset/@Monday)+sum($nodeset/@Tuesday) +sum($nodeset/@Wednesday)+sum($nodeset/@Thursday)+sum($nodeset/@Friday) +sum($nodeset/@Saturday)+sum($nodeset/@Sunday)" /> Our final output has the summed hours for each group! So… long story short… follow Laura’s blog, then group your list, then replace “$Rows” with “$nodeset”. One caveat, this will not work if you group by a person field. For some reason the person field does not go through each row in the group. I haven’t dug into this much yet. Maybe if I find some time… whatever that is… Anyway, Laura did all the work, I just took it one small step forward… as always, feel free to leave any additional insights you may have. We’re all learning here!

  • Visiting the Emtel Data Centre

    Back in February at the first event of the Emtel Knowledge Series (EKS) I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and was far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen...

    Setting up an appointment and pre-requisites

    The major improvement came during a Boot Camp for Windows Phone 8.1 App development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre it is requested that you provide the full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyways, packed with this information I posted through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) principle I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was!

    Arriving at the Emtel Data Centre

    As I'm coming from Flic En Flac towards the North, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after some extended stop we were complete and arrived just in time in Arsenal to meet and greet with Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, the processing was straightforward. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations during our visit, like no photography allowed, not touching the buttons, and following his instructions through the whole visit. Of course!

    Inside the data centre

    Next, he explained to us the multi-factor authentication system using a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. Very impressive to get to know about the considerations that went into choosing the right location and how to set up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings.

    Strengths of the Emtel TIER 3 Data Centre, Mauritius

    Finally, we were guided into the first server room. 
    And wow, the whole setup is cleverly planned and outlined in the architecture: from the false floors and ceilings in order to provide optimum air flow, over to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions in order to analyse and watch changes in temperature, smoke detection and other parameters. And, not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed to be a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onwards connectivity. Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens to monitor any kind of activity on the data lines, the managed servers and the activity in and around the building was great. Even though I have been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office.

    Clear advantages of hosting your e-commerce and mobile backends locally

    After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. On first thought it should be well noted that there is lots of space for full-sized racks and therefore co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started the conversation about additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. Until today Emtel does not provide virtualised server environments, but there might be plans in the future to cover this field as well. Emtel is a mobile operator and internet connectivity provider in the first place; entering a market of managed and virtualised server infrastructures, including capacities in terms of cloud storage and computing, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... 
    And I appreciate Emtel's approach of building a solid foundation and then adding new services on top of that, with Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs.

    Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius

    More details are thoroughly described in Emtel's brochure of their data centre. Check out their PDF document here.

    Thanks for this opportunity

    Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

  • XNA 2D Collision with specific tiles

    - by zenzero
    I am new to game programming and to these sites for help. I am making a 2D game but I can't seem to get the collision between my character and certain tiles. I have a map filled with grass tiles and water tiles and I want to keep my character from walking on the water tiles. I have a Tiles class that I use so that the tiles are objects and also has the collision method in it, a TileEngine class used create the map and it also holds a list of Tiles, and the class James which is for my character. I also have a Camera class that centers the camera on my character if that has anything to do with the problem. The character's movement is intended to be restricted to 4 directions(up, down, left, right). As an extra note, the bottom right water tile does have collision, but the collision does not occur for any of the other water tiles. Here is my TileEngine class using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace Test2DGame2 { class TileEngine : Microsoft.Xna.Framework.Game { //makes a list of Tiles objects public List<Tiles> tilesList = new List<Tiles>(); public TileEngine() {} public static int tileWidth = 64; public static int tileHeight = 64; public int[,] map = { {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, }; public void drawMap(SpriteBatch spriteBatch) { for (int y = 0; y < map.GetLength(0); y++) { for (int x = 0; x < map.GetLength(1); x++) { //make a Rectangle tilesList[map[y, x]].rectangle = new Rectangle(x * tileWidth, y * tileHeight, tileWidth, tileHeight); //draw the Tiles objects spriteBatch.Draw(tilesList[map[y, x]].texture, tilesList[map[y, x]].rectangle, Color.White); } } } } } Here is my Tiles class using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using 
Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace Test2DGame2 { class Tiles { public Texture2D texture; public Rectangle rectangle; public Tiles(Texture2D texture) { this.texture = texture; } //check to see if james collides with the tile from the right side public void rightCollision(James james) { if (james.GetBounds().Intersects(rectangle)) { james.position.X = rectangle.Left - james.front.Width; } } } } I have a method for rightCollision because I could only figure out how to get the collisions from specifying directions. and here is the James class for my character using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace Test2DGame2 { class James { public Texture2D front; public Texture2D back; public Texture2D left; public Texture2D right; public Vector2 center; public Vector2 position; public James(Texture2D front) { position = new Vector2(0, 0); this.front = front; center = new Vector2(front.Width / 2, front.Height / 2); } public James(Texture2D front, Vector2 newPosition) { this.front = front; position = newPosition; center = new Vector2(front.Width / 2, front.Height / 2); } public void move(GameTime gameTime) { KeyboardState keyboard = Keyboard.GetState(); float SCALE = 20.0f; float speed = gameTime.ElapsedGameTime.Milliseconds / 100.0f; if (keyboard.IsKeyDown(Keys.Up)) { position.Y -=speed * SCALE; } else if (keyboard.IsKeyDown(Keys.Down)) { position.Y += speed * SCALE; } else if (keyboard.IsKeyDown(Keys.Left)) { position.X -= speed * SCALE; } else if (keyboard.IsKeyDown(Keys.Right)) { position.X += speed * SCALE; } } public void draw(SpriteBatch spriteBatch) { spriteBatch.Draw(front, position, null, Color.White, 0, center, 1.0f, SpriteEffects.None, 0.0f); } //get the boundingbox for James public Rectangle GetBounds() { return new Rectangle( (int)position.X, (int)position.Y, front.Width, front.Height); } } }
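
    One likely culprit, for what it's worth: drawMap() stores each tile's rectangle on the shared per-type Tiles object (tilesList[map[y, x]].rectangle = ...), so after a full draw pass the single water Tiles object only remembers the rectangle of the last water tile drawn, the bottom-right one, which matches the symptom described. A sketch of collision computed from the map indices instead (assuming tile index 1 is water, as in the map array above):

        // in TileEngine; positions outside the map are treated as blocked
        public bool IsBlocked(Vector2 position)
        {
            int tx = (int)(position.X / tileWidth);
            int ty = (int)(position.Y / tileHeight);
            if (ty < 0 || ty >= map.GetLength(0) || tx < 0 || tx >= map.GetLength(1))
                return true;
            return map[ty, tx] == 1; // 1 = water
        }

    James.move() can then compute the proposed new position first and only commit it if IsBlocked returns false, which covers all four movement directions without per-direction collision methods.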

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer, approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place.

    Physics/Movement

    The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (ie/ mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (ie/ north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity and uses very basic integration to change the entity's position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity.

    State Change and Update Packets

    When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily boost it towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. It seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the player's speed from its movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occurs, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client side physics simulations (world state) of the remote clients and perform prediction and lag compensation. 
    Prediction and Lag Compensation

    As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag-compensation is clearly required within the client's local simulation / physics engine.

    Problems

    I've been struggling with various algorithms, and have a number of questions and problems: Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. State change is received by the client, the client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds with all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (ie/ state changes as well). Is the concept fundamentally flawed, or is it correct but needs some fixes / adjustments? What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine. What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall" which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
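
    On the first question, one common compromise is to keep pure velocity-based extrapolation in the physics engine and treat each incoming update as a positional error to be folded in gradually rather than snapped to; that also gives a natural answer to the "remote player stops" case, since a stop packet just leaves a residual error that bleeds away. A sketch of the idea (not authoritative; the Vector2 type, the entity/positionError fields and the 10f smoothing constant are all illustrative):

        // on receiving an "avatar position update" packet for a remote entity
        void OnPositionUpdate(Vector2 packetPos, Vector2 packetVel)
        {
            positionError = packetPos - entity.Position; // remember the correction, don't snap
            entity.Velocity = packetVel;                 // extrapolate with the newest velocity
        }

        // in the regular physics update
        void Update(float dt)
        {
            entity.Position += entity.Velocity * dt;
            // fold the remaining error in over roughly a tenth of a second
            Vector2 correction = positionError * Math.Min(1f, 10f * dt);
            entity.Position += correction;
            positionError -= correction;
        }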

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
    If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don’t need to be altered very often. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions and the database developers need only implement the interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests which are, of course, ‘written to’ the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors. 
There is no natural process to ‘version’ databases.  Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments looks to IT management like work. They see activity, and it looks good. Work means progress.  Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.
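
    A minimal sketch of what such an application-interface can look like in SQL Server terms (names illustrative): the application's role is granted rights only on an interface schema, and the base tables stay private to the database developers.

        CREATE SCHEMA CustomerApp;
        GO
        CREATE VIEW CustomerApp.CustomerSummary
        AS SELECT CustomerID, FullName, Region FROM dbo.Customer;
        GO
        CREATE PROCEDURE CustomerApp.AddCustomer @FullName nvarchar(100), @Region nvarchar(50)
        AS INSERT dbo.Customer (FullName, Region) VALUES (@FullName, @Region);
        GO
        GRANT SELECT, EXECUTE ON SCHEMA::CustomerApp TO CustomerAppRole;

    The view and procedure definitions are the external contract that lives with the application source; how dbo.Customer is structured behind them is free to change under the database's own source control.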

  • Small adventure game

    - by Nick Rosencrantz
    I'm making a small adventure game where the player can walk through dungeons and meet scary characters: The whole thing is 20 Java classes, and I'm making this a standalone frame, while it could very well be an applet; I don't want to make another applet, since I might want to recode this in C/C++ if the game or game engine turns out to be a success. The engine is the most interesting part of the game; it controls players and computer-controlled characters such as Zombies, Reptile Warriors, Trolls, Necromancers, and other Persons. These persons can sleep or walk around in the game and also pick up and move things. I didn't add many things, so I suppose the next thing to do is to add things that can get used, now that I have already added many different types of walking persons. What do you think I should add and do with things in the game? The things I have so far are: package adventure; /** * The data type for things. Subclasses will be created that take part in the story */ public class Thing { /** * The name of the Thing. */ public String name; /** * @param name The name of the Thing. */ Thing( String name ) { this.name = name; } } public class Scroll extends Thing { Scroll (String name) { super(name); } } class Key extends Thing { Key (String name) { super(name); } } The key is the way to win the game if you figure out that you should give it to a certain person, and the scroll can protect you from necromancers and trolls. If I make this game more Dungeons and Dragons-inspired, do you think it will be any good? Any other ideas that you think I could use here? The Thread which steps time forward and wakes up persons is called Simulation. Do you think I could do something more advanced with this class? package adventure; class Simulation extends Thread { private PriorityQueue Eventqueue; Simulation() { Eventqueue = new PriorityQueue(); start(); } public void wakeMeAfter(Wakeable SleepingObject, double time) { Eventqueue.enqueue(SleepingObject, System.currentTimeMillis()+time); } public void run() { while(true) { try { sleep(5); // sleep for half a second if (Eventqueue.getFirstTime() <= System.currentTimeMillis()) { ((Wakeable)Eventqueue.getFirst()).wakeup(); Eventqueue.dequeue(); } } catch (InterruptedException e ) { } } } } And here is the class that makes up the actual world: package adventure; import java.awt.*; import java.net.URL; /** * Subclass of World that builds up the Dungeon World. */ public class DungeonWorld extends World { /** * * @param a Reference to adventure game. 
* */ public DungeonWorld(Adventure a) { super ( a ); // Create all places createPlace( "Himlen" ); createPlace( "Stairs3" ); createPlace( "IPLab" ); createPlace( "Dungeon3" ); createPlace( "Stairs5" ); createPlace( "C2M2" ); createPlace( "SANS" ); createPlace( "Macsal" ); createPlace( "Stairs4" ); createPlace( "Dungeon2" ); createPlace( "Datorsalen" ); createPlace( "Dungeon");//, "Ljushallen.gif" ); createPlace( "Cola-automaten", "ColaAutomat.gif" ); createPlace( "Stairs2" ); createPlace( "Fable1" ); createPlace( "Dungeon1" ); createPlace( "Kulverten" ); // Create all connections between places connect( "Stairs3", "Stairs5", "Down", "Up" ); connect( "Dungeon3", "SANS", "Down", "Up" ); connect( "Dungeon3", "IPLab", "West", "East" ); connect( "IPLab", "Stairs3", "West", "East" ); connect( "Stairs5", "Stairs4", "Down", "Up" ); connect( "Macsal", "Stairs5", "South", "Norr" ); connect( "C2M2", "Stairs5", "West", "East" ); connect( "SANS", "C2M2", "West", "East" ); connect( "Stairs4", "Dungeon", "Down", "Up" ); connect( "Datorsalen", "Stairs4", "South", "Noth" ); connect( "Dungeon2", "Stairs4", "West", "East" ); connect( "Dungeon", "Stairs2", "Down", "Up" ); connect( "Dungeon", "Cola-automaten", "South", "North" ); connect( "Stairs2", "Kulverten", "Down", "Up" ); connect( "Stairs2", "Fable1", "East", "West" ); connect( "Fable1", "Dungeon1", "South", "North" ); // Add things // --- Add new things here --- getPlace("Cola-automaten").addThing(new CocaCola("Ljummen cola")); getPlace("Cola-automaten").addThing(new CocaCola("Avslagen Cola")); getPlace("Cola-automaten").addThing(new CocaCola("Iskall Cola")); getPlace("Cola-automaten").addThing(new CocaCola("Cola Light")); getPlace("Cola-automaten").addThing(new CocaCola("Cuba Cola")); getPlace("Stairs4").addThing(new Scroll("Scroll")); getPlace("Dungeon3").addThing(new Key("Key")); Simulation sim = new Simulation(); // Load images to be used as appearance-parameter for persons Image studAppearance = owner.loadPicture( "Person.gif" ); Image asseAppearance = owner.loadPicture( "Asse.gif" ); Image trollAppearance = owner.loadPicture( "Loke.gif" ); Image necromancerAppearance = owner.loadPicture( "Necromancer.gif" ); Image skeletonAppearance = owner.loadPicture( "Reptilewarrior.gif" ); Image reptileAppearance = owner.loadPicture( "Skeleton.gif" ); Image zombieAppearance = owner.loadPicture( "Zombie.gif" ); // --- Add new persons here --- new WalkingPerson(sim, this, "Peter", studAppearance); new WalkingPerson(sim, this, "Zombie", zombieAppearance ); new WalkingPerson(sim, this, "Zombie", zombieAppearance ); new WalkingPerson(sim, this, "Skeleton", skeletonAppearance ); new WalkingPerson(sim, this, "John", studAppearance ); new WalkingPerson(sim, this, "Skeleton", skeletonAppearance ); new WalkingPerson(sim, this, "Skeleton", skeletonAppearance ); new WalkingPerson(sim, this, "Skeleton", skeletonAppearance ); new WalkingPerson(sim, this, "Sean", studAppearance ); new WalkingPerson(sim, this, "Reptile", reptileAppearance ); new LabAssistant(sim, this, "Kate", asseAppearance); new LabAssistant(sim, this, "Jenna", asseAppearance); new Troll(sim, this, "Troll", trollAppearance); new Necromancer(sim, this, "Necromancer", necromancerAppearance); } /** * * The place where persons are placed by default * *@return The default place. 
* */ public Place defaultPlace() { return getPlace( "Datorsalen" ); } private void connect( String p1, String p2, String door1, String door2) { Place place1 = getPlace( p1 ); Place place2 = getPlace( p2 ); place1.addExit( door1, place2 ); place2.addExit( door2, place1 ); } } Thanks
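
    Since the question is what to do with things: one possible direction (all names here are hypothetical, not part of the game above) is to give Thing a polymorphic use hook, so each subclass carries its own story effect and the engine only ever calls useBy:

        public class Thing {
            public String name;
            Thing( String name ) { this.name = name; }
            /** Effect of using this thing; plain things do nothing. */
            public void useBy(Person user) { }
        }

        class Scroll extends Thing {
            Scroll (String name) { super(name); }
            @Override public void useBy(Person user) {
                user.setWarded(true); // wards off necromancers and trolls
            }
        }

        class Key extends Thing {
            Key (String name) { super(name); }
            @Override public void useBy(Person user) {
                // no effect by itself; winning is decided wherever the key
                // is given to the right person
            }
        }

    Here Person.setWarded is a hypothetical flag that the Necromancer and Troll classes would check before attacking; the same hook generalises to potions, traps, and other Dungeons and Dragons-style items.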

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers how they handle sending IRM-secured documents to external parties. Their concern is that when they use IRM to secure sensitive information they need to share outside their business, third parties may be unable to install the software which enables them to gain access to the information. It is a very legitimate question and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are: Control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications. Protection against programmatic access to the document; Office documents and PDF documents have the ability to be accessed by other applications and scripts, and with Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program. Securing of decrypted content in memory; at some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc.), and this process must be secure so that someone cannot simply get access to the decrypted information. The operating system alone just doesn't have the functionality to deliver these types of features. This is why for every IRM technology there must be some extra software installed, and typically this software requires administrative rights to install. The fact is that if you want to have very strong security and access control over a document you are going to send to someone who is beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12 MB in size. This software delivers functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer. In Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or by using Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine. You need to rely on them to download and install it themselves. Again, we've made every effort for this manual install process to be as simple as we can. From the small download size of the software itself to the simple installation process, most end users are able to install and access sealed content very quickly. 
    You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing, or may simply be unable to install, the software themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First, and I would say of most importance: the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send this to contractors. Their initial response will be, "why on earth are you asking me to download some software just to access your menu!?". A valid objection... there is no value to the user in doing this. Now consider the scenario where you are sending one of your contractors their employment contract, which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled, and a quick download and install of some software is a small task in comparison to dealing with the loss of this information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try to avoid the issue; you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straightforward. One customer I worked with, 6 months after the initial deployment of Oracle IRM, called me panicking that the partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and has decided to use IRM to control access to engineering documents", and if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their approved list of software in the company.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter. 
    Pretty much every company that deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, or Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed due to accessing content from another company.

    The future

    In summary, I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas for reducing this hurdle, and maybe one day the operating systems will give us enough rich security functionality to need no software installation. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

  • Alcatel-Lucent: Enterprise 2.0: The Top 5 Things I would Do Over

    - by Kellsey Ruppel
    Happy Monday! Does anyone else feel as if the weekend went entirely too quickly? At least for those of us in the United States, we have the 4th of July holiday next week to look forward to. This week on the blog, we are going to focus on "WebCenter by Example" and highlight best practices from customers and partners. I recently came across this article, and I think it is a great example of how we can learn from one another when it comes to social collaboration adoption. Do you agree with Jem? What things or best practices have you learned in your organizations?

    By Jem Janik, Enterprise community manager, Alcatel-Lucent

    Not so long ago, Engage, the Alcatel-Lucent employee social network and collaboration platform, celebrated its third birthday. With more than 25,000 members actively interacting each month, Engage has been a big enough success that it’s been the subject of external articles, and often those of us who helped launch it will go out and speak about what aspects contributed to that success. Hindsight is still 20/20, and what it takes to successfully launch an enterprise 2.0 community is fairly well known now. Today I want to tell you what I suspect you really want to know about. As the enterprise community manager for Engage, after three years in, what are the top 5 things I wish we (and I mostly mean me) could do over?

    #5 Define your analytics solution from the start

    There is so much to do when you launch a community, and initially growing it without complete chaos is quite a task. It doesn’t take too long to get to a point where you want to focus your continued efforts in growing company collaboration. Do people truly talk across regional boundaries, or have we shifted siloed conversations to a new platform? Is there one organization that doesn’t interact with another? If you are lucky you’ll have someone in your community team well versed in the world of databases and SQL queries, but it takes time to figure out what backend analytics data actually means. Professional support can be expensive, and it may be hard to justify later, as it typically has the community manager as the only main customer. Figure out what you think you’ll want to know and how to get it early on. The sooner the better, even if it doesn’t seem that critical at the time.

    #4 Lobbies guide you to the right places

    One piece of feedback that comes up more and more as we keep growing Engage is that it’s hard to find stuff, or new people are not sure where to start. Something we’re doing now is defining some general topic areas of interest to be like “lobbies” into the platform, and some common hashtags to go with them. I liken this to walking into a large medical or professional building for the first time. There are hundreds of offices, and you look to a sign in the lobby to get guided to the right place for you. We’re building that sign for members now, but again we missed the boat, as the majority of the company has had their initial Engage experience.

    #3 Clean up, clean up, clean up

    Knowledge work and folksonomies are messy! The day we opened the doors to Engage I would have said we should keep everything ever created in Engage, with the argument that it was a window into our collective knowledge so nothing should go. Well, 6000+ groups and 200,000+ pieces of content later, I’ve changed my mind. As previously mentioned, with too much “stuff” the system can be overwhelming to new members, and it makes it harder to get what you’re looking for. Do we need that help document about a tool we no longer have?
NO!  Do we need that group that had 1 document and 2 discussions in the last two years? NO! Should we only have one group about a given topic instead of 4?  YES! Last fall, Engage defined a cleanup process for groups not used for a long time.  We also formed a volunteer cleaning army who are extra eyes on the hunt for “stuff” that should be updated, merged, or deleted.  It’s better late than never, but in line with what’s becoming a theme I wish these efforts had started earlier. #2 Communications & local community management One of the most important aspects of my job is to make sure people who should be talking to each other are actually doing it.  Connecting people to the other people they should know, the groups they should join, a piece of content that shouldn’t be missed.   I have worked both inside and outside of communications teams, and they are the best informed people in your company.  They know when something big is coming, how it impacts employees, how it fits with strategy, who else knows more, etc.  Having communications professionals who are power users can help scale up community management because they are already so well connected.  They also need to have the platform skills to pay attention without suffering email overload, how to grab someone’s attention, etc.  I wish I’d had figured this out much earlier.  If I had I would have groomed more communications colleagues into advocates and power members right at the start. #1 Grooming advocates vs. natural advocates I’ve just alluded to this above already. The very best advocates are those who naturally embrace your platform and automatically start to see new ways to work within it.  Those advocates seem to come out of the woodwork naturally since some of them are early adopters.  Not surprisingly, our best advocates today are those same people who were willing to come kick the tires when the community was completely empty.  Unfortunately, we didn’t get a global spread of those natural advocates.  I did ask around when we first launched for other people who might be good candidates, but didn’t push too hard as there were so many other things to get ready.  That was a mistake.  If I could get a redo I would have formally asked for people to be assigned where there were gaps and groomed them into an advocate.  Today as we find new advocates to fill the gaps, people are hesitant as the initial set has three years of practice are ahead of the curve power members; it definitely would have been easier earlier on. As fairly early adopters to corporate scale enterprise collaboration, there hasn’t been a roadmap to follow as we’ve grown Engage, which is part of the fun! It’s clear a lot of issues are more easily tackled the earlier you identify and begin to correct them, and I’ve identified the main five I wish I could redo.  In the spirit of collaboration, I hope someone else learns from my mistakes! View the original article by Jem here. 

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wednesday, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results and metrics, roadmap, and recommendations. We greatly appreciate the insight he shared with us on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through the specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager at Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop, scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that help expedite and streamline compliance processes such as access certifications. While we tackled a few questions during the webcast, we have captured the responses to those we weren't able to get to here; our sincere thanks to Mark Robison for taking the time to respond to the questions specific to ING's implementation and strategy.

Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

Q. When doing attestation on a job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
A. The new and old managers are in the workflow. The tool can check for any Separation of Duties (SoD) violations with both having similar access. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

Q. What versions of OIM and OIA are being used at ING?
A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

Q. Are you using an entitlements / role catalog?
A. Yes. We use both roles and entitlements.

Q. What specific unexpected benefits did the Identity Warehouse provide ING?
A. The most unanticipated was helping Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

Q. How fine-grained are your applications and entitlements? Did OIA and OIM support that level of granularity?
A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers the Oracle Entitlement Server. We currently do not own this software but are considering it.

Q. Do you allow any individual access, or is everything truly role-based?
A. We are a hybrid environment with roles and individual positive and negative entitlements.

Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

Q. How did you handle rolling out the standard ID format to existing users?
A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement types? For example, an app may have different levels of access, and it may need to know the user's country/state to associate them with particular customers.
A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet with and interview the business. It is an iterative process to reach role consensus.

Q. Great presentation! How many rounds of certifications has ING performed so far?
A. Around 7 quarters of certifications, plus constant certifications on transfer.

Q. Did you have executive support from the top down?
A. Yes. The executive support was key to our success.

Q. For your cloud instances, are you using OIA or OIM as SaaS?
A. No. We are just provisioning and deprovisioning to various cloud providers. (ServiceNow is an example.)

Q. How do you ensure a role owner does not get more privileges than intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as this would affect the root role.
A. We have SoD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

Q. What is your ratio of employees to roles?
A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good, as we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

Q. Is ING using the Oracle On Demand service?
A. No.

Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the Identity Warehouse as well if you haven't reached OIM yet?
A. No, an OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying OIA and OIM together.

Q. When is the Security Governor product coming out?
A. Oracle Security Governor for Healthcare is available today.

Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today. You can also register for a live event at a city near you, where Aberdeen's Derek Brink will discuss the survey results from the recently published report "Analyzing Platform vs. Point Solution Approach in Identity". And you can do a quick (and free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here's the slide deck from our ING webcast: ING webcast platform (view more presentations from OracleIDM).

    Read the article

  • Mounting NAS share: Bad Address

    - by Korben
    I've run into a problem I can't solve; I hope you can help me with it. I have a QNAP TS-459U storage unit, running its own Linux, with a 'massive1' folder shared, which I need to mount on my Debian server. They are connected by a regular patch cord. The Debian server has two network interfaces: eth0 is for the Internet, eth1 is for the QNAP. So I'm running this, where 169.254.100.100 is the IP of the QNAP's interface:

mount -t cifs //169.254.100.100/massive1/ /mnt/storage -o user=admin

The result I get (after entering the password):

mount error(14): Bad address
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

I've tried mount.cifs and smbmount, with '/' at the end of the network share and without it, and many other variations of that command, and it's always:

mount error(14): Bad address

The funny thing is that when I was in the data center, I connected my netbook (running Fedora 16) to the QNAP using the same scheme, and it connected without any problems; I could read and write files on the QNAP's NAS share! So I'm really stuck with Debian. I can't understand what difference from Fedora is causing this error. Yes, I've used Google, and couldn't find any useful info. Ping to the QNAP's IP works, I can log into the QNAP's Linux by ssh, and telnet on port 139 works. This is the network interface configuration I use on Debian:

IP: 169.254.100.1
Netmask: 255.255.0.0

The only difference between connecting from Fedora and from Debian is that on Fedora I had added a gateway, 169.254.100.129, but ping to that IP does not work, so I think it's not necessary at all.

P.S.
~# cat /etc/debian_version
wheezy/sid
~# uname -a
Linux host 2.6.32-5-openvz-amd64 #1 SMP Mon Mar 7 22:25:57 UTC 2011 x86_64 GNU/Linux
~# smbtree
WORKGROUP
\\HOST host server
\\HOST\IPC$ IPC Service (host server)
\\HOST\print$ Printer Drivers
NAS
\\MASSIVE1 NAS Server
\\MASSIVE1\IPC$ IPC Service (NAS Server)
\\MASSIVE1\massive1
\\MASSIVE1\Network Recycle Bin 1 [RAID5 Disk Volume: Drive 1 2 3 4]
\\MASSIVE1\Public System default share
\\MASSIVE1\Usb System default share
\\MASSIVE1\Web System default share
\\MASSIVE1\Recordings System default share
\\MASSIVE1\Download System default share
\\MASSIVE1\Multimedia System default share

Please help me solve this strange issue. Thanks in advance.
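A hedged troubleshooting sketch for error 14 with mount.cifs: the error often comes from the kernel CIFS client rather than from the share itself, so forcing an older security mode and disabling the Unix extensions is a common first experiment. The option names below are standard mount.cifs options, but whether any of them resolves this particular case is an assumption worth testing one at a time:

    # try a specific SMB security mode and disable Unix extensions
    mount -t cifs //169.254.100.100/massive1 /mnt/storage \
        -o user=admin,sec=ntlm,nounix,noserverino

    # the kernel usually logs the real reason; check right after a failed mount
    dmesg | tail -n 20
    modinfo cifs | grep -i version

If dmesg shows a "CIFS VFS" error code, that code is usually more informative than the generic "Bad address" that mount prints, and it may explain why the Fedora kernel (a much newer CIFS client) succeeded where this one fails.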

    Read the article

  • dbus dependency with yum

    - by Hengjie
    Whenever, I try and run yum update I get the following error: [root@server ~]# yum update Loaded plugins: dellsysid, fastestmirror Loading mirror speeds from cached hostfile * base: mirror01.idc.hinet.net * extras: mirror01.idc.hinet.net * rpmforge: fr2.rpmfind.net * updates: mirror01.idc.hinet.net Excluding Packages in global exclude list Finished Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package NetworkManager.x86_64 1:0.7.0-13.el5 set to be updated ---> Package NetworkManager-glib.x86_64 1:0.7.0-13.el5 set to be updated ---> Package SysVinit.x86_64 0:2.86-17.el5 set to be updated ---> Package acl.x86_64 0:2.2.39-8.el5 set to be updated ---> Package acpid.x86_64 0:1.0.4-12.el5 set to be updated ---> Package apr.x86_64 0:1.2.7-11.el5_6.5 set to be updated ---> Package aspell.x86_64 12:0.60.3-12 set to be updated ---> Package audit.x86_64 0:1.8-2.el5 set to be updated ---> Package audit-libs.x86_64 0:1.8-2.el5 set to be updated ---> Package audit-libs-python.x86_64 0:1.8-2.el5 set to be updated ---> Package authconfig.x86_64 0:5.3.21-7.el5 set to be updated ---> Package autofs.x86_64 1:5.0.1-0.rc2.163.el5 set to be updated ---> Package bash.x86_64 0:3.2-32.el5 set to be updated ---> Package bind.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package bind-libs.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package bind-utils.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package binutils.x86_64 0:2.17.50.0.6-20.el5 set to be updated ---> Package centos-release.x86_64 10:5-8.el5.centos set to be updated ---> Package centos-release-notes.x86_64 0:5.8-0 set to be updated ---> Package coreutils.x86_64 0:5.97-34.el5_8.1 set to be updated ---> Package cpp.x86_64 0:4.1.2-52.el5 set to be updated ---> Package cpuspeed.x86_64 1:1.2.1-10.el5 set to be updated ---> Package crash.x86_64 0:5.1.8-1.el5.centos set to be updated ---> Package cryptsetup-luks.x86_64 0:1.0.3-8.el5 set to be updated ---> Package cups.x86_64 1:1.3.7-30.el5 set to be updated ---> Package cups-libs.x86_64 1:1.3.7-30.el5 set to be updated ---> Package curl.x86_64 0:7.15.5-15.el5 set to be updated --> Processing Dependency: dbus = 1.1.2-15.el5_6 for package: dbus-libs ---> Package dbus.x86_64 0:1.1.2-16.el5_7 set to be updated ---> Package dbus-libs.x86_64 0:1.1.2-16.el5_7 set to be updated ---> Package device-mapper.x86_64 0:1.02.67-2.el5 set to be updated ---> Package device-mapper-event.x86_64 0:1.02.67-2.el5 set to be updated ---> Package device-mapper-multipath.x86_64 0:0.4.7-48.el5_8.1 set to be updated ---> Package dhclient.x86_64 12:3.0.5-31.el5 set to be updated ---> Package dmidecode.x86_64 1:2.11-1.el5 set to be updated ---> Package dmraid.x86_64 0:1.0.0.rc13-65.el5 set to be updated ---> Package dmraid-events.x86_64 0:1.0.0.rc13-65.el5 set to be updated ---> Package dump.x86_64 0:0.4b41-6.el5 set to be updated ---> Package e2fsprogs.x86_64 0:1.39-33.el5 set to be updated ---> Package e2fsprogs-devel.x86_64 0:1.39-33.el5 set to be updated ---> Package e2fsprogs-libs.x86_64 0:1.39-33.el5 set to be updated ---> Package ecryptfs-utils.x86_64 0:75-8.el5 set to be updated ---> Package file.x86_64 0:4.17-21 set to be updated ---> Package finger.x86_64 0:0.17-33 set to be updated ---> Package firstboot-tui.x86_64 0:1.4.27.9-1.el5.centos set to be updated ---> Package freetype.x86_64 0:2.2.1-28.el5_7.2 set to be updated ---> Package freetype-devel.x86_64 0:2.2.1-28.el5_7.2 set to be updated ---> Package ftp.x86_64 0:0.17-37.el5 set to be updated ---> Package gamin.x86_64 
0:0.1.7-10.el5 set to be updated ---> Package gamin-python.x86_64 0:0.1.7-10.el5 set to be updated ---> Package gawk.x86_64 0:3.1.5-15.el5 set to be updated ---> Package gcc.x86_64 0:4.1.2-52.el5 set to be updated ---> Package gcc-c++.x86_64 0:4.1.2-52.el5 set to be updated ---> Package glibc.i686 0:2.5-81.el5_8.1 set to be updated ---> Package glibc.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-common.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-devel.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-headers.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package gnutls.x86_64 0:1.4.1-7.el5_8.2 set to be updated ---> Package groff.x86_64 0:1.18.1.1-13.el5 set to be updated ---> Package gtk2.x86_64 0:2.10.4-21.el5_7.7 set to be updated ---> Package gzip.x86_64 0:1.3.5-13.el5.centos set to be updated ---> Package hmaccalc.x86_64 0:0.9.6-4.el5 set to be updated ---> Package htop.x86_64 0:1.0.1-2.el5.rf set to be updated ---> Package hwdata.noarch 0:0.213.26-1.el5 set to be updated ---> Package ifd-egate.x86_64 0:0.05-17.el5 set to be updated ---> Package initscripts.x86_64 0:8.45.42-1.el5.centos set to be updated ---> Package iproute.x86_64 0:2.6.18-13.el5 set to be updated ---> Package iptables.x86_64 0:1.3.5-9.1.el5 set to be updated ---> Package iptables-ipv6.x86_64 0:1.3.5-9.1.el5 set to be updated ---> Package iscsi-initiator-utils.x86_64 0:6.2.0.872-13.el5 set to be updated ---> Package kernel.x86_64 0:2.6.18-308.1.1.el5 set to be installed ---> Package kernel-headers.x86_64 0:2.6.18-308.1.1.el5 set to be updated ---> Package kpartx.x86_64 0:0.4.7-48.el5_8.1 set to be updated ---> Package krb5-devel.x86_64 0:1.6.1-70.el5 set to be updated ---> Package krb5-libs.x86_64 0:1.6.1-70.el5 set to be updated ---> Package krb5-workstation.x86_64 0:1.6.1-70.el5 set to be updated ---> Package ksh.x86_64 0:20100621-5.el5_8.1 set to be updated ---> Package kudzu.x86_64 0:1.2.57.1.26-3.el5.centos set to be updated ---> Package less.x86_64 0:436-9.el5 set to be updated ---> Package lftp.x86_64 0:3.7.11-7.el5 set to be updated ---> Package libX11.x86_64 0:1.0.3-11.el5_7.1 set to be updated ---> Package libX11-devel.x86_64 0:1.0.3-11.el5_7.1 set to be updated ---> Package libXcursor.x86_64 0:1.1.7-1.2 set to be updated ---> Package libacl.x86_64 0:2.2.39-8.el5 set to be updated ---> Package libgcc.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libgomp.x86_64 0:4.4.6-3.el5.1 set to be updated ---> Package libpng.x86_64 2:1.2.10-16.el5_8 set to be updated ---> Package libpng-devel.x86_64 2:1.2.10-16.el5_8 set to be updated ---> Package libsmbios.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package libstdc++.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libstdc++-devel.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libsysfs.x86_64 0:2.1.0-1.el5 set to be updated ---> Package libusb.x86_64 0:0.1.12-6.el5 set to be updated ---> Package libvolume_id.x86_64 0:095-14.27.el5_7.1 set to be updated ---> Package libxml2.x86_64 0:2.6.26-2.1.15.el5_8.2 set to be updated ---> Package libxml2-python.x86_64 0:2.6.26-2.1.15.el5_8.2 set to be updated ---> Package logrotate.x86_64 0:3.7.4-12 set to be updated ---> Package lsof.x86_64 0:4.78-6 set to be updated ---> Package lvm2.x86_64 0:2.02.88-7.el5 set to be updated ---> Package m2crypto.x86_64 0:0.16-8.el5 set to be updated ---> Package man.x86_64 0:1.6d-2.el5 set to be updated ---> Package man-pages.noarch 0:2.39-20.el5 set to be updated ---> Package mcelog.x86_64 1:0.9pre-1.32.el5 set to be updated ---> 
Package mesa-libGL.x86_64 0:6.5.1-7.10.el5 set to be updated ---> Package mesa-libGL-devel.x86_64 0:6.5.1-7.10.el5 set to be updated ---> Package microcode_ctl.x86_64 2:1.17-1.56.el5 set to be updated ---> Package mkinitrd.x86_64 0:5.1.19.6-75.el5 set to be updated ---> Package mktemp.x86_64 3:1.5-24.el5 set to be updated --> Processing Dependency: nash = 5.1.19.6-68.el5_6.1 for package: mkinitrd ---> Package nash.x86_64 0:5.1.19.6-75.el5 set to be updated ---> Package net-snmp.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-devel.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-libs.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-utils.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-tools.x86_64 0:1.60-82.el5 set to be updated ---> Package nfs-utils.x86_64 1:1.0.9-60.el5 set to be updated ---> Package nfs-utils-lib.x86_64 0:1.0.8-7.9.el5 set to be updated ---> Package nscd.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package nspr.x86_64 0:4.8.9-1.el5_8 set to be updated ---> Package nspr-devel.x86_64 0:4.8.9-1.el5_8 set to be updated ---> Package nss.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss-devel.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss-tools.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss_ldap.x86_64 0:253-49.el5 set to be updated ---> Package ntp.x86_64 0:4.2.2p1-15.el5.centos.1 set to be updated ---> Package numactl.x86_64 0:0.9.8-12.el5_6 set to be updated ---> Package oddjob.x86_64 0:0.27-12.el5 set to be updated ---> Package oddjob-libs.x86_64 0:0.27-12.el5 set to be updated ---> Package openldap.x86_64 0:2.3.43-25.el5 set to be updated ---> Package openssh.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssh-clients.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssh-server.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssl.i686 0:0.9.8e-22.el5_8.1 set to be updated ---> Package openssl.x86_64 0:0.9.8e-22.el5_8.1 set to be updated ---> Package openssl-devel.x86_64 0:0.9.8e-22.el5_8.1 set to be updated ---> Package pam_krb5.x86_64 0:2.2.14-22.el5 set to be updated ---> Package pam_pkcs11.x86_64 0:0.5.3-26.el5 set to be updated ---> Package pango.x86_64 0:1.14.9-8.el5.centos.3 set to be updated ---> Package parted.x86_64 0:1.8.1-29.el5 set to be updated ---> Package pciutils.x86_64 0:3.1.7-5.el5 set to be updated ---> Package perl.x86_64 4:5.8.8-38.el5 set to be updated ---> Package perl-Compress-Raw-Bzip2.x86_64 0:2.037-1.el5.rf set to be updated ---> Package perl-Compress-Raw-Zlib.x86_64 0:2.037-1.el5.rf set to be updated ---> Package perl-rrdtool.x86_64 0:1.4.7-1.el5.rf set to be updated ---> Package poppler.x86_64 0:0.5.4-19.el5 set to be updated ---> Package poppler-utils.x86_64 0:0.5.4-19.el5 set to be updated ---> Package popt.x86_64 0:1.10.2.3-28.el5_8 set to be updated ---> Package postgresql-libs.x86_64 0:8.1.23-1.el5_7.3 set to be updated ---> Package procps.x86_64 0:3.2.7-18.el5 set to be updated ---> Package proftpd.x86_64 0:1.3.4a-1.el5.rf set to be updated --> Processing Dependency: perl(Mail::Sendmail) for package: proftpd ---> Package python.x86_64 0:2.4.3-46.el5 set to be updated ---> Package python-ctypes.x86_64 0:1.0.2-3.el5 set to be updated ---> Package python-libs.x86_64 0:2.4.3-46.el5 set to be updated ---> Package python-smbios.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package rhpl.x86_64 0:0.194.1-2 set to be updated ---> Package rmt.x86_64 0:0.4b41-6.el5 set to be updated ---> Package rng-utils.x86_64 1:2.0-5.el5 set to 
be updated ---> Package rpm.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-build.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-devel.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-libs.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-python.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rrdtool.x86_64 0:1.4.7-1.el5.rf set to be updated ---> Package rsh.x86_64 0:0.17-40.el5_7.1 set to be updated ---> Package rsync.x86_64 0:3.0.6-4.el5_7.1 set to be updated ---> Package ruby.x86_64 0:1.8.5-24.el5 set to be updated ---> Package ruby-libs.x86_64 0:1.8.5-24.el5 set to be updated ---> Package sblim-sfcb.x86_64 0:1.3.11-49.el5 set to be updated ---> Package sblim-sfcc.x86_64 0:2.2.2-49.el5 set to be updated ---> Package selinux-policy.noarch 0:2.4.6-327.el5 set to be updated ---> Package selinux-policy-targeted.noarch 0:2.4.6-327.el5 set to be updated ---> Package setup.noarch 0:2.5.58-9.el5 set to be updated ---> Package shadow-utils.x86_64 2:4.0.17-20.el5 set to be updated ---> Package smartmontools.x86_64 1:5.38-3.el5 set to be updated ---> Package smbios-utils-bin.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package smbios-utils-python.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package sos.noarch 0:1.7-9.62.el5 set to be updated ---> Package srvadmin-omilcore.x86_64 0:6.5.0-1.452.1.el5 set to be updated ---> Package strace.x86_64 0:4.5.18-11.el5_8 set to be updated ---> Package subversion.x86_64 0:1.6.11-7.el5_6.4 set to be updated ---> Package sudo.x86_64 0:1.7.2p1-13.el5 set to be updated ---> Package sysfsutils.x86_64 0:2.1.0-1.el5 set to be updated ---> Package syslinux.x86_64 0:3.11-7 set to be updated ---> Package system-config-network-tui.noarch 0:1.3.99.21-1.el5 set to be updated ---> Package talk.x86_64 0:0.17-31.el5 set to be updated ---> Package tar.x86_64 2:1.15.1-31.el5 set to be updated ---> Package traceroute.x86_64 3:2.0.1-6.el5 set to be updated ---> Package tzdata.x86_64 0:2012b-3.el5 set to be updated ---> Package udev.x86_64 0:095-14.27.el5_7.1 set to be updated ---> Package util-linux.x86_64 0:2.13-0.59.el5 set to be updated ---> Package vixie-cron.x86_64 4:4.1-81.el5 set to be updated ---> Package wget.x86_64 0:1.11.4-3.el5_8.1 set to be updated ---> Package xinetd.x86_64 2:2.3.14-16.el5 set to be updated ---> Package yp-tools.x86_64 0:2.9-2.el5 set to be updated ---> Package ypbind.x86_64 3:1.19-12.el5_6.1 set to be updated ---> Package yum.noarch 0:3.2.22-39.el5.centos set to be updated ---> Package yum-dellsysid.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package yum-fastestmirror.noarch 0:1.1.16-21.el5.centos set to be updated ---> Package zlib.x86_64 0:1.2.3-4.el5 set to be updated ---> Package zlib-devel.x86_64 0:1.2.3-4.el5 set to be updated --> Running transaction check --> Processing Dependency: dbus = 1.1.2-15.el5_6 for package: dbus-libs --> Processing Dependency: nash = 5.1.19.6-68.el5_6.1 for package: mkinitrd ---> Package perl-Mail-Sendmail.noarch 0:0.79-1.2.el5.rf set to be updated base/filelists | 3.5 MB 00:00 dell-omsa-indep/filelists | 195 kB 00:01 dell-omsa-specific/filelists | 1.0 kB 00:00 extras/filelists_db | 224 kB 00:00 rpmforge/filelists | 4.8 MB 00:06 updates/filelists_db | 715 kB 00:00 --> Finished Dependency Resolution dbus-libs-1.1.2-15.el5_6.i386 from installed has depsolving problems --> Missing Dependency: dbus = 1.1.2-15.el5_6 is needed by package dbus-libs-1.1.2-15.el5_6.i386 (installed) mkinitrd-5.1.19.6-68.el5_6.1.i386 from installed has depsolving problems --> 
Missing Dependency: nash = 5.1.19.6-68.el5_6.1 is needed by package mkinitrd-5.1.19.6-68.el5_6.1.i386 (installed) Error: Missing Dependency: nash = 5.1.19.6-68.el5_6.1 is needed by package mkinitrd-5.1.19.6-68.el5_6.1.i386 (installed) Error: Missing Dependency: dbus = 1.1.2-15.el5_6 is needed by package dbus-libs-1.1.2-15.el5_6.i386 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. I have tried running package-cleanup --dupes and package-cleanup --problems but to no avail.
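The two unresolved dependencies are both stale 32-bit (i386) packages left behind on an x86_64 system: the x86_64 copies of dbus and mkinitrd can move forward, but their i386 twins have no matching update in the repositories. A hedged sketch of one way out, assuming nothing else on the box actually needs the 32-bit copies (the --test dry run is there to verify that before removing anything):

    # dry run first: --test reports what would break without changing anything
    rpm -e --test dbus-libs-1.1.2-15.el5_6.i386 mkinitrd-5.1.19.6-68.el5_6.1.i386

    # if the test is clean, remove the stale i386 packages and retry the update
    rpm -e dbus-libs-1.1.2-15.el5_6.i386 mkinitrd-5.1.19.6-68.el5_6.1.i386
    yum update

    # alternatively, yum-utils offers an automated duplicate cleanup
    package-cleanup --cleandupes

package-cleanup --cleandupes is a real yum-utils mode, but whether it catches this particular multilib mismatch is an assumption; the explicit rpm -e route is the more predictable of the two.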

    Read the article

  • ubuntu bind9 AppArmor read permission denied (chroot jail)

    - by Richard Whitman
    I am trying to run bind9 with a chroot jail. I followed the steps mentioned at http://www.howtoforge.com/debian_bind9_master_slave_system. I am getting the following errors in my syslog:

Jul 27 16:53:49 conf002 named[3988]: starting BIND 9.7.3 -u bind -t /var/lib/named
Jul 27 16:53:49 conf002 named[3988]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
Jul 27 16:53:49 conf002 named[3988]: adjusted limit on open files from 4096 to 1048576
Jul 27 16:53:49 conf002 named[3988]: found 4 CPUs, using 4 worker threads
Jul 27 16:53:49 conf002 named[3988]: using up to 4096 sockets
Jul 27 16:53:49 conf002 named[3988]: loading configuration from '/etc/bind/named.conf'
Jul 27 16:53:49 conf002 named[3988]: none:0: open: /etc/bind/named.conf: permission denied
Jul 27 16:53:49 conf002 named[3988]: loading configuration: permission denied
Jul 27 16:53:49 conf002 named[3988]: exiting (due to fatal error)
Jul 27 16:53:49 conf002 kernel: [74323.514875] type=1400 audit(1343433229.352:108): apparmor="DENIED" operation="open" parent=3987 profile="/usr/sbin/named" name="/var/lib/named/etc/bind/named.conf" pid=3992 comm="named" requested_mask="r" denied_mask="r" fsuid=103 ouid=103

Looks like the process cannot read the file /var/lib/named/etc/bind/named.conf. I have made sure that the owner of this file is user bind, and that it has read/write access to it:

root@test:/var/lib/named/etc/bind# ls -atl
total 64
drwxr-xr-x 3 bind bind 4096 2012-07-27 16:35 ..
drwxrwsrwx 2 bind bind 4096 2012-07-27 15:26 zones
drwxr-sr-x 3 bind bind 4096 2012-07-26 21:36 .
-rw-r--r-- 1 bind bind 666 2012-07-26 21:33 named.conf.options
-rw-r--r-- 1 bind bind 514 2012-07-26 21:18 named.conf.local
-rw-r----- 1 bind bind 77 2012-07-25 00:25 rndc.key
-rw-r--r-- 1 bind bind 2544 2011-07-14 06:31 bind.keys
-rw-r--r-- 1 bind bind 237 2011-07-14 06:31 db.0
-rw-r--r-- 1 bind bind 271 2011-07-14 06:31 db.127
-rw-r--r-- 1 bind bind 237 2011-07-14 06:31 db.255
-rw-r--r-- 1 bind bind 353 2011-07-14 06:31 db.empty
-rw-r--r-- 1 bind bind 270 2011-07-14 06:31 db.local
-rw-r--r-- 1 bind bind 2994 2011-07-14 06:31 db.root
-rw-r--r-- 1 bind bind 463 2011-07-14 06:31 named.conf
-rw-r--r-- 1 bind bind 490 2011-07-14 06:31 named.conf.default-zones
-rw-r--r-- 1 bind bind 1317 2011-07-14 06:31 zones.rfc1918

What could be wrong here?
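The AppArmor denial line answers where the block comes from: the shipped profile for /usr/sbin/named only covers the default paths such as /etc/bind and /var/cache/bind, not a chroot under /var/lib/named. A hedged sketch of a profile extension, assuming the stock Ubuntu profile layout (the exact file to edit varies by release; releases that ship /etc/apparmor.d/local/ prefer the override file there, and the specific subpaths below are assumptions based on the howto's chroot layout):

    # append to /etc/apparmor.d/usr.sbin.named
    # (or to /etc/apparmor.d/local/usr.sbin.named where that exists)
    /var/lib/named/** r,
    /var/lib/named/var/cache/bind/** lrw,
    /var/lib/named/var/run/named/** rw,

    # then reload the profile and restart bind
    sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
    sudo service bind9 restart

The minimal fix is whatever read rule covers the named.conf path shown in the DENIED log line; the broader cache and run rules are only needed once named gets far enough to write journals and its PID file inside the jail.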

    Read the article

  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just set up Active Directory in my company and imported all the Linux users and groups. On the Linux client (configured to ask LDAP in nsswitch.conf): if I do a plain ldapsearch against the AD LDAP server, I get the complete count of about 2580 users. But this gets only a part of all users, 1221 in number:

getent passwd | wc -l

Running it with strace shows what looks like an attempt to reconnect. My ideas were: does the Linux authentication procedure run its LDAP search with a parameter incompatible with AD LDAP? Or maybe it is an encoding issue; the Windows users are entered in AD with all kinds of characters. Maybe someone could shed light on this and give a hint on how to debug it further!? Here's our ldap.conf:

host audc01.mycompany.de audc03.mycompany.de
base ou=location,dc=mycompany,dc=de
ldap_version 3
binddn cn=manager,ou=location,dc=mycompany,dc=de
bindpw Password
timelimit 120
idle_timelimit 3600
nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub
nss_base_group ou=location,dc=mycompany,dc=de?sub
# RFC 2307 (AD) mappings
nss_map_objectclass posixAccount User
# nss_map_objectclass shadowAccount User
nss_map_objectclass posixGroup Group
nss_map_attribute uid sAMAccountName
nss_map_attribute cn sAMAccountName
# Display Name
nss_map_attribute gecos cn
## nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute loginShell msSFU30LoginShell
# PAM attributes
pam_login_attribute sAMAccountName
# Location based login
pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de
pam_member_attribute msSFU30PosixMember
## pam_lookup_policy yes
pam_filter objectclass=User
nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data

And here is the trace from strace getent passwd:

poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}])
read(4, "0\204\0\0\0A\2\1", 8) = 8
read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63
stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0
geteuid32() = 12560
getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0
getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0
time(NULL) = 1297684722
rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
munmap(0xb7617000, 1721) = 0
close(3) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
write(4, "0\5\2\1\5B\0", 7) = 7
shutdown(4, 2 /* send and receive */) = 0
close(4) = 0
shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor)
close(-1) = -1 EBADF (Bad file descriptor)
exit_group(0) = ?
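One pattern worth checking, offered as an assumption rather than a diagnosis: Active Directory caps a single LDAP search at its MaxPageSize policy (1000 entries by default), so an NSS client that does not request the paged-results control silently stops after the first page; a truncated count like 1221 of 2580 fits that shape. A sketch, assuming a pam_ldap/nss_ldap build that understands these directives (the names vary between versions):

    # /etc/ldap.conf -- ask AD to return results in pages
    nss_paged_results yes
    pagesize 1000

You can confirm the server-side cap independently with OpenLDAP's ldapsearch, whose -E pr option requests the same control:

    ldapsearch -x -H ldap://audc01.mycompany.de \
        -D 'cn=manager,ou=location,dc=mycompany,dc=de' -W \
        -b 'cn=users,cn=import,ou=location,dc=mycompany,dc=de' \
        -E pr=1000/noprompt '(objectclass=User)' dn | grep -c '^dn:'

If the count with paging differs from the count without it, the page limit (and not encoding) is the culprit.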

    Read the article

  • tproxy squid bridge very slow when cache is full

    - by Roberto
    I have installed a tproxy bridge proxy on a fast server with 8GB of RAM. The traffic is around 60Mb/s. When I start the proxy for the first time (with the cache empty) it works very well, but when the cache becomes full (a few hours later) the bridge gets very slow, the traffic drops below 10Mb/s, and the proxy server becomes unusable. Any hints on what may be happening? I'm using linux-2.6.30.10, iptables-1.4.3.2, and squid-3.1.1, compiled with these options:

./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter --enable-epoll --disable-poll --with-maxfd=16384 --enable-err-languages=Spanish --enable-default-err-language=Spanish

My squid.conf:

cache_mem 100 MB
memory_pools off
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl to_localhost dst ::1/128
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl net-g1 src xxx.xxx.xxx.xxx/24
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow net-g1        # from where browsing should be allowed
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128
http_port 3129 tproxy
hierarchy_stoplist cgi-bin ?
cache_dir ufs /var/spool/squid 8000 16 256
access_log none
cache_log /var/log/squid/cache.log
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern .

I have this issue when the cache is full, but I do not really know if that is the cause. Thanks in advance, and sorry for my English. roberto
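A hedged reading of the symptom, offered as an assumption to test rather than a diagnosis: with cache_dir ufs, all disk I/O runs synchronously in Squid's single event loop. While the cache is filling this is mostly sequential writes, but once it is full, every request adds replacement reads, unlinks, and writes, which can stall the whole proxy. Since this build was compiled with --enable-async-io=32, the threaded aufs store should be available:

    # squid.conf -- a sketch: move disk I/O into worker threads and start
    # object replacement earlier, so it happens gradually rather than in bursts
    cache_dir aufs /var/spool/squid 8000 16 256
    cache_swap_low 90
    cache_swap_high 95

    # while the slowdown is happening, check store and disk statistics
    squidclient mgr:info
    squidclient mgr:storedir

Comparing disk service times in mgr:info between the fast (empty-cache) and slow (full-cache) states would confirm or rule out this theory before committing to the store change.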

    Read the article

  • How to create Windows 7 installation USB media from Linux? (to install Windows 7) - help needed to find a better method

    - by Abel Coto
    I have been reading some web pages and posts here and in other forums about how to create Windows 7 installation USB media (to install Windows 7 from a USB stick) from Linux. I asked on TechNet about this, and they gave me general ideas about how to do it. I personally am not very familiar with Linux, but basically, whatever way you do it, all that you need to do is the following: format a USB flash drive as either FAT32 or NTFS, create a partition that is large enough to host the Windows installation (give or take 3GB for 64-bit, around 2.5GB for 32-bit), and mark that partition as active/bootable. Since this can be done with Windows, but just as well with a tool like GParted, you should be able to do the same in Debian. Once you have created that partition, mount the ISO that you downloaded and copy all files, starting from the root, into the root of the USB flash drive. That's all there is to it.

There is a method that I found in various places that is almost the same as what the man from TechNet described, except that it has one extra step, and I don't know whether that step is really necessary or not. dd does not always work; basically, the missing step was to write a proper boot sector to the USB stick, which can be done from Linux with ms-sys. This works with the Win7 retail version. Here is the complete rundown again:

1. Install ms-sys.
2. Check what device your USB media is assigned; here we will assume it is /dev/sdb.
3. Delete all partitions, create a new one taking up all the space, set the type to NTFS, and set it bootable:
   # cfdisk /dev/sdb
4. Create the NTFS filesystem:
   # mkfs.ntfs -f /dev/sdb1
5. Mount the ISO and the USB media:
   # mount -o loop win7.iso /mnt/iso
   # mount /dev/sdb1 /mnt/usb
6. Copy over all files:
   # cp -r /mnt/iso/* /mnt/usb/
7. Write a Windows 7 MBR on the USB stick:
   # ms-sys -7 /dev/sdb

...and you're done.

Shouldn't the USB work without the last step (# ms-sys -7 /dev/sdb)? Or is it a must for making the USB bootable, rather than only marking the partition as bootable? Would it be better to use rsync instead of cp -r? All these steps should be done as root, I suppose; or, if not, chmod to 664 and chown the directories where the USB and the ISO are mounted, no? But I suppose the easier thing is to copy the data as root, and that this will not affect the data. Has anyone tried this method, or something similar like copying the ISO with dd?
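For what it's worth, the distinction the question circles around: the partition's bootable flag only tells the MBR code which partition to chain-load, while ms-sys -7 writes the MBR code itself, so the flag alone is not enough on a drive that never had Windows-compatible MBR code. A condensed sketch of the whole procedure, with rsync in place of cp -r since it can resume an interrupted copy; /dev/sdb and the mount points are assumptions:

    # as root; double-check the device name before partitioning!
    mkfs.ntfs -f -L WIN7USB /dev/sdb1   # after creating one bootable NTFS partition
    mkdir -p /mnt/iso /mnt/usb
    mount -o loop win7.iso /mnt/iso
    mount /dev/sdb1 /mnt/usb
    rsync -rv /mnt/iso/ /mnt/usb/       # trailing slashes: copy contents, not the dir
    umount /mnt/usb                     # flush write buffers before touching the MBR
    ms-sys -7 /dev/sdb                  # MBR goes on the disk, not the partition
    umount /mnt/iso

Running everything as root sidesteps the permission questions: NTFS via ntfs-3g largely ignores Unix ownership anyway, so chmod/chown gymnastics buy nothing here.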

    Read the article

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers, and autocomplete works on all of them bar one. The issue: apt-<tab> would show a list of options, but sudo apt-<tab> would not. After fiddling with it for a few hours, I found that /etc/bash_completion did not exist on the broken server. After copying the one from a working server, it now works, but still not properly: sudo apt-get ins<tab> does not do anything. Listing the files in /etc/bash_completion.d/ on the working server shows about 50 files, while the broken one has only two or three. I don't think I can just copy those files over, though, as they might offer completions for things that are not even installed.

TL;DR: autocomplete is broken; how can I fix it? It seems to be disabled somewhere — why is this?

EDIT: OK, it was never installed...

$ sudo apt-get install bash-completion
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed
  bash-completion
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 140kB of archives.
After this operation, 1,061kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB]
Fetched 140kB in 0s (174kB/s)
Selecting previously deselected package bash-completion.
(Reading database ... 23808 files and directories currently installed.)
Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ...
Processing triggers for man-db ...
Setting up bash-completion (1:1.2-2ubuntu1.1) ...

It's now kind of working, but still wonky: apt-get ins<tab> completes to apt-get insserv, and apt-get install php5<tab> gives apt-get install php5/ rather than the php5-* options.
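A quick way to check whether the completions are actually wired into the current shell, sketched under the assumption that this bash-completion version names the apt-get completion function _apt_get (function names differ between releases):

    # is the framework sourced in this shell? if not, load it now
    type _apt_get >/dev/null 2>&1 || . /etc/bash_completion

    # which completion spec is bound to apt-get right now?
    complete -p apt-get

    # completions behind 'sudo apt-get ...' also need the sudo spec loaded
    complete -p sudo

If complete -p prints nothing for a command, bash falls back to plain filename completion, which would explain results like php5/ (a directory name) instead of package names; a fresh login shell after installing the package usually clears that up.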

    Read the article

  • dnsmasq acts as the DHCP server for selected nodes overriding the existing DHCP server on the same LAN?

    - by user183394
    I am trying to set up a small "lab" at home. Like many modern homes, I have a regular DSL service, which comes with a 2Wire 3600HGV router that also acts as a DHCP server. My constraints are these:

- I would like to PXE boot a few computers in my "lab".
- The 2Wire is inflexible to the adjustments I want to make.
- I have used dnsmasq at work, so I would like to use dnsmasq as the DHCP server for the few nodes in my "lab", if feasible.

In the dnsmasq man page, there is the following:

[...] -K, --dhcp-authoritative (IPv4 only) Should be set when dnsmasq is definitely the only DHCP server on a network. It changes the behaviour from strict RFC compliance so that DHCP requests on unknown leases from unknown hosts are not ignored. This allows new hosts to get a lease without a tedious timeout under all circumstances. It also allows dnsmasq to rebuild its lease database without each client needing to reacquire a lease, if the database is lost. [...]

As far as I know, the ISC DHCP server can use the following to do what I would like to accomplish:

authoritative;
[...]
subnet 192.168.1.0 netmask 255.255.255.0 {
  host nb0 {
    # only give DHCP information to this computer:
    hardware ethernet e8:9a:8f:17:70:42;
    fixed-address 192.168.1.10;
    option subnet-mask 255.255.255.0;
    option routers 192.168.1.254;
    option domain-name-servers 192.168.1.254;
    # Non-essential DHCP options
    filename "/pxelinux.0";
  }
[...]

But I much prefer dnsmasq's "all-in-one-ness". My question: do I have to couple the -K option with something else? As shown in the example above, the ISC DHCP server requires the MAC addresses of managed nodes to be explicitly specified. Does dnsmasq have something similar? FYI, the machine on which I plan to run dnsmasq runs CentOS 6.3 64-bit. It has a statically assigned IP address: 192.168.1.3.
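dnsmasq does have a direct equivalent, and it composes with per-host pinning: dhcp-host binds a MAC to a fixed address, and dhcp-ignore=tag:!known tells dnsmasq to stay silent for every client without a dhcp-host entry, so the 2Wire keeps serving the rest of the LAN. Both directives are documented dnsmasq options; the interface name, address range, and TFTP paths below are assumptions for this particular lab:

    # /etc/dnsmasq.conf -- serve DHCP/PXE only to explicitly listed lab machines
    interface=eth0
    dhcp-range=192.168.1.100,192.168.1.120,12h
    dhcp-host=e8:9a:8f:17:70:42,labnode0,192.168.1.10  # MAC reused from the ISC example
    dhcp-ignore=tag:!known       # ignore any client not named in a dhcp-host line
    dhcp-boot=pxelinux.0
    enable-tftp
    tftp-root=/var/lib/tftpboot

With dhcp-ignore=tag:!known in place, -K/--dhcp-authoritative is arguably best left off, since dnsmasq is deliberately not the only DHCP server on this network.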

    Read the article

  • Output php mail calls to log file

    - by Tom McQuarrie
    This question relates to the question found here: Find the php script thats sending mails. I'm trying to do the exact same thing, but I can't get the log to output what I need. I'm not too experienced with Server Fault; ideally I'd post my follow-up on the original question, or PM Adam to see if he ever found a solution, but it looks as though Server Fault doesn't work that way. I can post an "answer", but that's definitely not what this is.

I have a script located at /usr/local/bin/sendmail-php-logged, with the following:

#!/bin/sh
logger -p mail.info sendmail-php: site=${HTTP_HOST}, client=${REMOTE_ADDR}, script=${SCRIPT_NAME}, filename=${SCRIPT_FILENAME}, docroot=${DOCUMENT_ROOT}, pwd=${PWD}, uid=${UID}, user=$(whoami)
/usr/sbin/sendmail -t -i $*

This is logging to /var/log/maillog, but as Adam mentions in his question, none of the server variables work. The output I'm getting is:

Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
Oct 4 12:17:03 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
Oct 4 12:17:05 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
Oct 4 12:17:11 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
Oct 4 12:17:14 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
Oct 4 12:17:29 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
Oct 4 12:17:41 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root

User ID, current user, and pwd are all working, probably because they're process-level values and not specific to PHP, like all the others are. I've tried using other server variables as per labradort's instructions, but no joy. Here are some sample tests:

logger -p mail.info sendmail-php SCRIPT_NAME: ${SCRIPT_NAME}
logger -p mail.info sendmail-php SCRIPT_FILENAME: ${SCRIPT_FILENAME}
logger -p mail.info sendmail-php PATH_INFO: ${PATH_INFO}
logger -p mail.info sendmail-php PHP_SELF: ${PHP_SELF}
logger -p mail.info sendmail-php DOCUMENT_ROOT: ${DOCUMENT_ROOT}
logger -p mail.info sendmail-php REMOTE_ADDR: ${REMOTE_ADDR}
logger -p mail.info sendmail-php SCRIPT_NAME: $SCRIPT_NAME
logger -p mail.info sendmail-php SCRIPT_FILENAME: $SCRIPT_FILENAME
logger -p mail.info sendmail-php PATH_INFO: $PATH_INFO
logger -p mail.info sendmail-php PHP_SELF: $PHP_SELF
logger -p mail.info sendmail-php DOCUMENT_ROOT: $DOCUMENT_ROOT
logger -p mail.info sendmail-php REMOTE_ADDR: $REMOTE_ADDR

And the output:

Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:
Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:

I'm running PHP 5.3.10. Unfortunately register_globals is on, for compatibility with legacy systems, but you wouldn't think that would cause the environment variables to stop working. If someone can give me some hints as to why this might not be working, I'll be a very happy man :)
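One hedged explanation and workaround, stated as an assumption about how PHP invokes sendmail rather than a verified diagnosis of this server: PHP's mail() spawns the sendmail_path command without exporting the $_SERVER entries as environment variables, so a shell wrapper can only ever see process-level values like UID and PWD, which is exactly the split visible in the logs above. Since this box runs PHP 5.3.10, the built-in mail logging directives (both added in PHP 5.3.0) may capture the originating script without any wrapper:

    ; php.ini -- both directives exist as of PHP 5.3
    mail.add_x_header = On            ; adds X-PHP-Originating-Script: uid:filename
    mail.log = /var/log/phpmail.log   ; logs the calling script path per mail() call

After reloading Apache/PHP, each mail() call should append a line naming the script that sent it, which is the information the ${SCRIPT_FILENAME} expansion was reaching for.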

    Read the article

  • How can I install things in Linux with *no yum* and *no wget*?

    - by e9t
    I'm a newbie to Linux (I mainly use Windows and Mac OS X) and I need some advice. I was trying to install git on a Linux machine today and encountered some problems. Not knowing the version of the installed OS, I opened the /proc/version file, which said:

Linux version 2.6.9-42.0.2.ELsmp ([email protected]) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-3)) #1 SMP Thu Aug 17 17:57:31 EDT 2006

Then, as written in the git documentation (http://git-scm.com/download/linux), I assumed I could use the yum install git command for Fedora, but got the following result:

[root@myserver ~]# yum install git
-bash: yum: command not found

So I tried installing yum using wget, but wasn't so lucky:

[root@myserver ~]# wget http://linux.duke.edu/projects/yum/download/2.0/yum-2.0.7.tar.gz
-bash: wget: command not found

I googled and found this page and this page, so I tried installing yum with rpm, but only got a result full of question marks (possibly an encoding problem, hmm...):

[root@myserver ~]# rpm -Uvh http://www.eomy.net/linux/install-yum-x86_64/wget-1.10.2-0.40E.x86_64.rpm
http://www.eomy.net/linux/install-yum-x86_64/wget-1.10.2-0.40E.x86_64.rpm(??)?? ?????? ?: /var/tmp/rpm-xfer.TbuAOu: V3 DSA signature: NOKEY, key ID 443e1821 ???.. ########################################### [100%] wget-1.10.2-0.40E U???????g??????? wget-1.10.2-0.40E???? ??g??/usr/bin/wget ?? wget-1.10.2-0.40E U?????? ???? wget-1.10.2-0.40E???? ??g??/usr/share/man/man1/wget.1.gz ?? wget-1.10.2-0.40E U?????? ????
[root@myserver ~]#

Finally, when I typed rpm --version in the terminal, I got the result below:

[root@myserver ~]# rpm --version
RPM ???? - 4.3.3

I would like to know what I can do, or could possibly try, now. Is it not possible to wget or yum anything in my situation? Or is there any magical tool like Homebrew (http://mxcl.github.com/homebrew/) that I can use? Any comments or advice would be appreciated. Thanks in advance!
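A hedged reading of the transcript: the question marks look like the locale mangling rpm's translated messages rather than a failure, and the "[100%]" progress bar suggests the wget package actually installed. A few things to try, assuming tools typical of a 2006-era Red Hat base system; the availability of each one is an assumption, so probe with command -v first, and the example.com URLs are placeholders:

    # re-run rpm with untranslated messages so its output is readable
    LANG=C rpm -q wget

    # if wget really did install, it can fetch everything else
    command -v wget && wget http://example.com/some-package.rpm

    # fall back to curl or Python's urllib when wget is absent
    curl -O http://example.com/some-package.rpm
    python -c 'import urllib; urllib.urlretrieve("http://example.com/p.rpm", "p.rpm")'

    # or download on another machine and copy it across
    scp user@otherhost:some-package.rpm . && rpm -ivh some-package.rpm

One caution: a 2.6.9 kernel means a RHEL/CentOS 4-era system, so matching packages come from an archival vault mirror for that release, not from current repositories.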

    Read the article

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to set up IPv6 address autoconfiguration with the router advertisement daemon (radvd) on a virtual machine running CentOS 6.5, but the eth0 interface is not obtaining that prefix. I've obtained the ULA prefix from here.

Contents of /etc/sysctl.conf:

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

Contents of /etc/radvd.conf:

# NOTE: there is no such thing as a working "by-default" configuration file.
# At least the prefix needs to be specified. Please consult the radvd.conf(5)
# man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help.
#
interface eth0
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    AdvDefaultPreference low;
    AdvHomeAgentFlag off;
    prefix fd8a:8d9d:808f:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr on;
    };
};

Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
HWADDR=52:54:00:74:d7:46
TYPE=Ethernet
UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
IPV6INIT=yes
IPV6_AUTOCONF=yes

I've also enabled radvd at startup through chkconfig, though I noticed that radvd starts after the interfaces are brought up. I've tried restarting the network service afterwards, but I still get only the following link-local address:

# ip -6 addr show
1: lo: mtu 16436
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qlen 1000
    inet6 fe80::5054:ff:fe74:d746/64 scope link
       valid_lft forever preferred_lft forever

Edit: Based on the answer given by Sander Steffann, I still need clarification on some points, but I'm posting here what worked.

Contents of /etc/sysconfig/network:

NETWORKING=yes
HOSTNAME=syslog-ng-server
NETWORKING_IPV6=yes
IPV6FORWARDING=yes

Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
HWADDR=52:54:00:74:d7:46
TYPE=Ethernet
UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6FORWARDING=no

Removed the following line from /etc/sysctl.conf:

net.ipv6.conf.all.forwarding = 1

The contents of /etc/radvd.conf are as before.
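A hedged note on why the "what worked" edit makes sense, treating the kernel behaviour as an assumption to verify on this particular kernel: Linux normally ignores incoming router advertisements on an interface whose forwarding flag is set (accept_ra is disabled for routers), so with net.ipv6.conf.all.forwarding = 1 the box advertised the prefix but refused to autoconfigure from its own RAs; turning forwarding off for eth0 lets SLAAC proceed. A quick verification sketch:

    # both should be 1 on eth0 for SLAAC to work here
    sysctl net.ipv6.conf.eth0.accept_ra
    sysctl net.ipv6.conf.eth0.autoconf

    # radvdump (shipped with radvd) prints RAs as they appear on the wire
    radvdump

    # after a restart, the fd8a:8d9d:808f:1::/64 address should show up
    service radvd restart && ip -6 addr show dev eth0

On kernels new enough to support it, setting accept_ra = 2 accepts RAs even with forwarding enabled, which is the usual way to have one machine both advertise a prefix and autoconfigure from it.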

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >