Search Results

Search found 19962 results on 799 pages for 'edo post'.

  • Changes to File Store Provider in UCM PS3

    - by Kevin Smith
    In the recent PS3 release of UCM (11.1.1.4.0) there are some significant changes to the File Store Provider (FSP) configuration. For new PS3 installs (not upgrades from PS2), the FSP default storage rule includes a dispersion rule that changes the web-layout and vault paths by adding dispersion directories to the paths, limiting the number of files in the vault and web-layout directories. What that means is that if you install a new PS3 UCM instance and migrate content in from a previous version of UCM, the web URL will change. That is a critical problem for web sites and for general document management. See below for some details on the FSP configuration in PS3 and how you can change the default behavior. Use the link below to read the rest of this post, where I describe the issue in detail and provide instructions for how to modify a PS3 instance to use the old format for the web-layout path.
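
    To make the idea of a dispersion rule concrete: it inserts extra directory levels, derived from the content ID, into the vault and web-layout paths so that no single directory accumulates too many files. The Python sketch below is purely hypothetical (it does not show the real FSP rule syntax or Oracle's actual path algorithm); it only illustrates why the same document ends up at a different URL once dispersion is in play.

        def dispersed_weblayout_path(security_group, doc_name, d_id, bucket_size=1000):
            """Hypothetical illustration only: derive two dispersion levels from the
            content's dID so each leaf directory holds at most ~bucket_size files."""
            level1 = (d_id // (bucket_size * bucket_size)) % 1000
            level2 = (d_id // bucket_size) % 1000
            # The directory layout here is made up for illustration, not UCM's real layout.
            return f"weblayout/groups/{security_group}/documents/{level1}/{level2}/{doc_name}"

        # Without a dispersion rule the same content would sit directly under .../documents/,
        # which is why content migrated into a new PS3 instance gets a different web URL.
        print(dispersed_weblayout_path("public", "report.pdf", d_id=1234567))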

    Read the article

  • When is a Use Case layer needed?

    - by Meta-Knight
    In his blog post The Clean Architecture, Uncle Bob suggests a 4-layer architecture. I understand the separation between business rules, interfaces and infrastructure, but I wonder if/when it's necessary to have separate layers for domain objects and use cases. What added value will it bring, compared to just having the use cases as "domain services" in the domain layer? The only useful info I've found on the web about a use case layer is an article by Martin Fowler, who seems to contradict Uncle Bob about its necessity: At some point I may run into the problems, and then I'll make a Use Case Controller - but only then. And even when I do that I rarely consider the Use Case Controllers to occupy a separate layer in the system architecture. Edit: I stumbled upon a video of Uncle Bob's Architecture: The Lost Years keynote, in which he explains this architecture in depth. Very informative.
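
    Just to make the distinction concrete, here is a minimal, hypothetical Python sketch (the names are illustrative and come from neither article): the use case layer holds an interactor that orchestrates domain objects, while the alternative I describe above would fold the same orchestration into the domain layer as a "domain service".

        from dataclasses import dataclass

        # Domain layer: entities and business rules, with no knowledge of use cases or UI.
        @dataclass
        class Account:
            balance: float

            def withdraw(self, amount: float) -> None:
                if amount > self.balance:
                    raise ValueError("insufficient funds")
                self.balance -= amount

        # Use case layer: application-specific orchestration of domain objects.
        class WithdrawCash:
            """Interactor: fetch the entity, apply the domain rule, persist the result."""

            def __init__(self, accounts):            # 'accounts' is any repository-like object
                self.accounts = accounts

            def execute(self, account_id: str, amount: float) -> float:
                account = self.accounts.get(account_id)
                account.withdraw(amount)             # the business rule stays in the entity
                self.accounts.save(account)
                return account.balance

        # Dropping the separate layer would mean exposing the same orchestration as a
        # "domain service" inside the domain layer, which saves a layer at the cost of
        # mixing application flow (fetch, coordinate, persist) with the domain model itself.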

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-31

    - by Bob Rhubart
    SOA Suite 11g Asynchronous Testing with soapUI | Greg Mally Greg Mally walks you through testing asynchronous web services with the free edition of soapUI. The Role of Oracle VM Server for SPARC in a Virtualization Strategy | Matthias Pfutzner Matthias Pfutzner's overview of hardware and software virtualization basics, and the role that Oracle VM Server for SPARC plays in a virtualization strategy. Cloud Computing: Oracle RDS on AWS - Connecting with DB tools | Tom Laszewski Cloud expert and author Tom Laszewski shares brief comments about the tools he used to connect two Oracle RDS instances in AWS. Keystore Wallet File – cwallet.sso – Zum Teufel! | Christian Screen "One of the items that trips up a FMW implementation, if only for mere minutes, is the cwallet.sso file," says Oracle ACE Christian Screen. In this short post he offers information to help you avoid landing on your face. Thought for the Day "With good program architecture debugging is a breeze, because bugs will be where they should be." — David May Source: SoftwareQuotes.com

    Read the article

  • More About PeopleSoft Feature Packs

    - by john.webb(at)oracle.com
    In my previous PeopleSoft Feature Pack post I introduced the new PeopleSoft Feature Pack delivery process. The response has been fantastic. It appears our customers agree that this new offering benefits them in many ways.   Since there has been so much interest in our Feature Pack strategy and since so many customers have been referencing our PeopleSoft FAQ in which we explain this new delivery mechanism, we've created the short presentation below to further explain Feature Packs.    

    Read the article

  • Moodle Inconsistencies, Flowplayer, or Server?

    - by dglickler
    We are trying to decide if an issue we're having with Moodle and our videos is a Flowplayer issue, a video issue, or a network issue. Any input is welcome. We've had videos in our Moodle (version 1.9; we're working on an upgrade on a different server) up for weeks, or even months. After that time, some of them have suddenly just stopped working. When they don't work, they just don't load. The videos work when we first upload them. With Flowplayer, we don't get errors, just a blank screen. We have re-uploaded our videos several times when this has happened, but we really would like to know what's causing it so that we can prevent it. There are also no keyframe issues with the video. We are currently trying to find answers through various searches, but have not had any luck. I will continue to post more info as I come across it...but if anyone knows what's going on or has ideas, your input is welcome.

    Read the article

  • Resources for popular domain models

    - by Songo
    I have come across many situations where I had to build a system for a library, a clinic, or another popular domain. The thing is, a domain model for a library has probably been done 1000 times already, with different levels of detail of course. Here is an example. Is there a popular website or community where one can find ready-made domain models for popular systems? The whole purpose I'm trying to achieve is to quickly get a grasp of the domain I'm modeling and customize it to my needs. Re-inventing the wheel seems really absurd when the same system might have been modeled properly before. Note: I know Google might sound like the perfect source, but what I'm after is a repository where people can post their models so that others can reuse them.

    Read the article

  • How to fix “Unit Test Runner failed to load test assembly”

    - by ybbest
    I encountered this issue a couple of times during my recent project, and every time I forgot what actually caused it. Therefore, I decided to write a quick blog post to make sure I can identify the issue quickly.

    Problem: I ran unit tests using a test runner and received a "Unit Test Runner failed to load test assembly" exception.

    Analysis: Basically, I had changed some code and started the test runner to run the tests. The same DLL had already been deployed to the GAC, so the test runner was actually trying to use the old version of the assembly and thus could not load it.

    Solution: Deploy the current version of the DLL to the GAC and re-run your tests; it works like a charm.

    Read the article

  • Restore database to the point of disaster

    - by TiborKaraszi
    This is really basic, but so often overlooked and misunderstood. Basically, we have a database, and something goes south. Can we restore all the way up to that point? I.e., even if the last backup (db or log) is earlier than the disaster? Yes, of course we can (except in more extreme cases; read on), but many don't realize/do that, for some strange reason. This blog post was inspired by a thread in the MSDN forums, which exposed just this misunderstanding. Basically the scenario was that they...(read more)

    Read the article

  • Extension Manager in Visual Studio 2010

    One of the powerful aspects of Visual Studio is its ability to be extended, and many people do that. You can find numerous extensions at the Visual Studio Gallery. The VSX team links to a 4-part blog series on how to create and share templates. You can also find extension examples on the VSX code gallery. With Visual Studio 2010, you can search for items and install them directly from within Visual Studio's new Extension Manager, which you launch from the Tools menu. When the dialog comes up, be sure to explore the various actionable areas on the left and also note the search on the right. For example, I typed "MP" and it quickly filtered the list to show me the MPI Project Template. Others have written about this before me, just bing Extension Manager (and note that Beta2 introduced changes, some of which you can witness in the screenshot above). Comments about this post are welcome at the original blog.

    Read the article

  • Game-a-Week One Completed

    - by Matt Christian
    Last night I finished my Game-a-Week One and felt an extreme sense of accomplishment with what I finished in a single week. I removed all traces of the JigLibX code since it wasn't working properly due to my implementation, and got my collision working thanks to some BoundingSpheres and Riemer's tutorials.  However, since the characters are Corndogs, a rectangular bounding box would have made more sense, although the bullets are only able to move forward currently. While developing it in a week was a challenge, developing it in a week while maintaining proper coding standards and clean, reusable code was damn near impossible.  It's possible my next step will be to either refactor it or move on to Game-a-Week Two. I will post a link to the game ZIP in the future.
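
    For anyone curious about the BoundingSpheres approach: the core test is just comparing the distance between the sphere centers with the sum of the radii. Here is a rough Python sketch of that idea (not my actual XNA code, where BoundingSphere.Intersects does the same check for you); the numbers are made up:

        from dataclasses import dataclass

        @dataclass
        class BoundingSphere:
            x: float
            y: float
            z: float
            radius: float

        def intersects(a: BoundingSphere, b: BoundingSphere) -> bool:
            """Two spheres collide when the distance between their centers is no more
            than the sum of their radii (compare squared values to avoid a sqrt)."""
            dx, dy, dz = a.x - b.x, a.y - b.y, a.z - b.z
            dist_sq = dx * dx + dy * dy + dz * dz
            reach = a.radius + b.radius
            return dist_sq <= reach * reach

        # Example: a corndog-sized sphere and a smaller bullet sphere just touching.
        print(intersects(BoundingSphere(0, 0, 0, 1.0), BoundingSphere(1.5, 0, 0, 0.5)))  # True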

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some I am sure will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me.  My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me… On to a real problem: SSIS connection timeouts versus command timeouts.

    Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new partitions to, and dropping old partitions from, an Analysis Services cube.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window.

    My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again, failure.

    After a number of further failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBAs found a post on the Teradata forum that had the clue to the puzzle: there is a separate “CommandTimeout” custom property on the data source object that may need to be adjusted for longer-running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was twofold:

    Always compare query execution times between development and production environments, and don’t assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.

    SSIS connection timeout settings do not affect command timeouts.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair, though, in the 5+ years that I have been working with SSIS, I can recall only one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
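
    The same login-timeout versus command-timeout split exists in most database client stacks, not just SSIS. As a hedged illustration only (this is not how SSIS sets the property, and the connection string and query below are made up), here is a small Python sketch using pyodbc, where the two settings are controlled independently:

        import pyodbc

        # Hypothetical connection string; adjust the driver, server and credentials for your setup.
        conn_str = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=uat-dw;DATABASE=dw;UID=etl;PWD=secret"

        # The 'timeout' argument is the LOGIN timeout: how long to wait while establishing
        # the connection before deciding the server is unreachable.
        cnxn = pyodbc.connect(conn_str, timeout=120)

        # Connection.timeout is the QUERY timeout: how long a statement may run before the
        # driver gives up. This is the analogue of the CommandTimeout lesson above.
        cnxn.timeout = 600  # seconds

        cursor = cnxn.cursor()
        cursor.execute("SELECT COUNT(*) FROM fact_sales")  # stand-in for the long-running partition query
        print(cursor.fetchone()[0])
        cnxn.close()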

    Read the article

  • Determining Cost of API Calls

    - by Sam
    [This is a cross-post originally posted by me on SO. I think the question is more appropriate here.] I was going through the AdWords API and came across their rate sheet - http://code.google.com/apis/adwords/docs/ratesheet.html . They charge $0.25 per 1000 API units and, under the 'Operation Costs' section, list the cost (in API units) of different API calls. I am curious: based on what factors do they (and other API developers) calculate the cost of an API call? Is there any simple formula or a standard way to determine this? Note: When I say 'cost' of an API call, I don't mean the money but the API units. For example, how do you determine that one API call costs 100 'units' and another 1000?

    Read the article

  • New T-SQL Functionality in SQL Server 2008

    - by ejohnson2010
    In my most recent posts I have looked at a few of the new features offered in T-SQL in SQL Server 2008. In this post, I want to take a closer look at some of the smaller additions, but additions that are likely to pack a big punch in terms of efficiency. First let’s talk a little about compound operators. This is a concept that has been around in programming languages for a long time, but has just now found its way into T-SQL. For example, the += operator will add the values to the current variable...(read more)

    Read the article

  • Will small random dynamic snippets break caching

    - by Saif Bechan
    I am busy writing a WordPress plugin. Now, most users have cache plugins installed, and these cache the pages. I also know some web servers such as nginx have PHP caching and whatnot. There are also things like memcached. I have to admit I do not know exactly how they work; if anyone has some good links on how they work I would be glad, ideally links where it's explained simply, not too technically. Now the question: my plugin displays different statistics on posts, and they are always different. Will this break the caching of the page? To take it a step further, sometimes the statistics of the post need updating, and a small JavaScript snippet is added to the page. Will these two actions result in the page not caching, or am I OK?
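
    For context on what I'm considering: one pattern I've seen keeps the cached HTML completely static and has a small script on the page fetch the per-post numbers from a separate, uncached endpoint. The sketch below is only a rough Python/Flask illustration of that idea (the plugin itself would do this in PHP and JavaScript, and every route name and field here is made up):

        from flask import Flask, jsonify

        app = Flask(__name__)

        # The post page itself stays fully cacheable because it contains no per-view numbers;
        # the JavaScript snippet on the page calls this endpoint to fill the statistics in.
        @app.route("/stats/<int:post_id>")
        def post_stats(post_id):
            stats = {"post_id": post_id, "views": 123, "likes": 4}  # made-up values for the sketch
            response = jsonify(stats)
            # Ask page caches and the browser not to cache this small dynamic response.
            response.headers["Cache-Control"] = "no-store"
            return response

        if __name__ == "__main__":
            app.run()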

    Read the article

  • Announcing the Winnipeg Visual Studio.NET 2010 Launch Event!

    - by D'Arcy Lussier
    That’s right Winnipeg, we’re having our own Visual Studio.NET 2010 launch event on May 11th brought to you by your local Winnipeg .NET User Group, Anvil Digital, Imaginet, Microsoft, and Protegra! We’re excited to bring a day of sessions highlighting developer productivity, application lifecycle management, and web development using these new technologies! We’re also thrilled to have this event at the IMAX Theatre at Portage Place! The day looks like this: The event is FREE and we’re providing a continental breakfast for attendees. To register for the event, visit our registration site here. If you have any questions, please contact me through comments on this post or via email through my blog. D’Arcy

    Read the article

  • Best questions to ask a startup founder, CTO, or CEO

    - by YGomez
    This is not a duplicate of this SE post. Most questions of this sort center around an interview experience. I want questions that you might be genuinely interested in, even if they are not at all appropriate to ask during a job interview. If you have read the book Founders at Work, that is the kind of question I am talking about. So I guess: what would you ask if you were interviewing them? I am especially interested in questions that might give a possible future startup founder some insight.

    Read the article

  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah, I know I don’t blog much anymore, because life has taken me places that don’t involve the interwebs, unfortunately.  I am in the midst of planning two events, starting a not-for-profit, creating more sessions for various conferences, submitting to various conferences, working a 40-hour-a-week job, and attempting to hang out with boyfriend/friends/family.  So you can see that list does not include this blog; sadly, that’s how it goes sometimes.  The bottom piece is very important, over any of the top pieces.  I haven’t seen St. Louis in a while and I get to go back.  I was gone from home for MVP Summit and the Best Practices Conference, so the boyfriend and cat didn’t get to see me either for a bit.  Then you have to add in the whole toilet-being-broken fiasco this week.  Maintenance really thought it would be cool to turn off the ability to flush.  I mean, who does that?  Then when we called the owner he came by, turned it on, and we figured it was an accident, because, well, the next day no one came by to tell us there was a leak.  It was all kinds of strangeness and involved me running to other people’s toilets.  As Dan Usher would say, I was a sad panda for a few days.  So I guess I wanted to post a few thoughts here just because I can.

    I do not like multiple content editor web parts embedded with HTML files in numerous pages doing the same thing.  I will tell you why I don’t like these particular web parts and the way they are being used.  First off, if you have a bunch of pages with script includes, it’s about time you just dumped them into the master page.  Why bother finding all 20 pages and changing those pages when you can just use a single master page that already exists?

    The other thing that is bothering me these days is screen scraping.  Just don’t do it, because in 2010 you will find the UI is substantially slower.  I understand you are new and you have no idea what to do.  You are also using 2007, am I right?  So then you need to go to codeplex.com and type in a search for SPServices.  Download it, use it, love it and then have its babies (well, maybe don’t go that far; this is not the Grid in Tron).

    If you have a ton of constants in your code, why did you not go in and create a web part with a bunch of properties and/or link to a configuration list hidden in the browser?  This type of property and list could help you out in the long run.  The power users and administrators can now change the control without you having to compile it over and over again.  It’s good stuff.  Also, you can change the control without compiling it, especially in 2007 where you have to do a farm solution.  In 2010 you can do a sandbox solution, I guess, but shouldn’t you make it as easy and supportable as possible for other users?

    In conclusion, I’m an angry person when it comes to viewing something repeatedly and analyzing it in a system.  Now we will move on to the next topic… MVP Summit… So yeah, I can’t really talk about particulars, but I can talk about my experience as a person.  Don’t build something up to be cooler than it is only to be dropped from your 10,000-foot perch.  My experience was great, but the content overall left something to be desired.  It’s OK; I got to meet a lot of people I would not have met if I had not gone.  Some of it was surreal, such as product group members showing up and talking to us.  It was pretty neat.  Plus I never had the chance to get to that mythical MS office in Redmond.  Prior to Summit, it was like Rainbow Brite’s unicorn taunting me on television when I was a kid.

    So I guess, with all that said, I give it a B.  It was awesome in some ways, but lacking in other ways.  The cool part is that I got to go.  Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I’m going to pass and give myself a piece of Sunday to play Rock Band and do 800 other things.  I hope the two of you who read this blog are well.  I’ll catch you all at another juncture.  Have a good weekend and the varying holidays in between. Technorati Tags: SharePoint, MVP Summit, JQuery, Javascript

    Read the article

  • Ubuntu 11.10 and Atheros AR8131 ethernet card

    - by nivcaner
    I have an Atheros AR8131 Ethernet card on a Lenovo B560 laptop. Sometime in the past, probably at some upgrade or other, my wired connection stopped functioning. I know the card is OK because it works with Windows. I tried upgrading to Ubuntu 11.10 hoping the problem would go away. It didn't... I tried to Google the problem, but none of the solutions I found worked for me. This is probably a driver problem. Any help will be appreciated. Please note that I'm a Linux newbie, so I don't really know what logs to post... Niv

    Read the article

  • PHP Login/Register system [migrated]

    - by Marian
    I found this good tutorial on creating a login/register system using PHP and MySQL. The forum thread is around 5 years old (edited last year) but it can still be useful: Beginner Simple Register-Login system. There seems to be an issue with both the login and register pages.

        <?php
        function register_form(){
            $date = date('D, M, Y');
            echo "<form action='?act=register' method='post'>"
                ."Username: <input type='text' name='username' size='30'><br>"
                ."Password: <input type='password' name='password' size='30'><br>"
                ."Confirm your password: <input type='password' name='password_conf' size='30'><br>"
                ."Email: <input type='text' name='email' size='30'><br>"
                ."<input type='hidden' name='date' value='$date'>"
                ."<input type='submit' value='Register'>"
                ."</form>";
        }

        function register(){
            $connect = mysql_connect("host", "username", "password");
            if(!$connect){
                die(mysql_error());
            }
            $select_db = mysql_select_db("database", $connect);
            if(!$select_db){
                die(mysql_error());
            }
            $username = $_REQUEST['username'];
            $password = $_REQUEST['password'];
            $pass_conf = $_REQUEST['password_conf'];
            $email = $_REQUEST['email'];
            $date = $_REQUEST['date'];
            if(empty($username)){
                die("Please enter your username!<br>");
            }
            if(empty($password)){
                die("Please enter your password!<br>");
            }
            if(empty($pass_conf)){
                die("Please confirm your password!<br>");
            }
            if(empty($email)){
                die("Please enter your email!");
            }
            $user_check = mysql_query("SELECT username FROM users WHERE username='$username'");
            $do_user_check = mysql_num_rows($user_check);
            $email_check = mysql_query("SELECT email FROM users WHERE email='$email'");
            $do_email_check = mysql_num_rows($email_check);
            if($do_user_check > 0){
                die("Username is already in use!<br>");
            }
            if($do_email_check > 0){
                die("Email is already in use!");
            }
            if($password != $pass_conf){
                die("Passwords don't match!");
            }
            $insert = mysql_query("INSERT INTO users (username, password, email) VALUES ('$username', '$password', '$email')");
            if(!$insert){
                die("There's little problem: ".mysql_error());
            }
            echo $username.", you are now registered. Thank you!<br><a href=login.php>Login</a> | <a href=index.php>Index</a>";
        }

        switch($act){
            default;
            register_form();
            break;
            case "register";
            register();
            break;
        }
        ?>

    Once the register button is pressed the page does nothing: the fields are erased, no data is added to the database, and no error is given. I thought that the problem might be the switch($act){ part, so I removed it and changed the page to use a require('connect.php'); where connect.php is:

        <?php
        mysql_connect("localhost","host","password");
        mysql_select_db("database");
        ?>

    I removed the function register_form(){ and the echo part, turning it into HTML code:

        <form action='register' method='post'>
        Username: <input type='text' name='username' size='30'><br>
        Password: <input type='password' name='password' size='30'><br>
        Confirm your password: <input type='password' name='password_conf' size='30'><br>
        Email: <input type='text' name='email' size='30'><br>
        <input type='hidden' name='date' value='$date'>
        <input type='submit' name="register" value='Register'>
        </form>

    And instead of having a function register(){ I replaced it with an if($register){ so that when the Register button is pressed it runs the PHP code, but this edit doesn't seem to work either. So what can the problem be? If needed I can re-add this code on my domain. The login page has the same issue: nothing happens when the button is pressed besides emptying the fields.

    Read the article

  • NVIDIA 560 TI driver install ubuntu 14.04 leads to "missing on display" error

    - by allthosemiles
    Currently on my Ubuntu 14.04 install, I boot it up, install the latest updates and attempt to install the NVIDIA drivers from xorg-edgers while following the top answer on this post: Installing Nvidia Drivers. It installs 304 for my card, and when I check with "glxinfo | grep OpenGL" I get about 8 lines that read Xlib: extension "NV-GLX" missing on display ":0". I did "sudo apt-get install nvidia-current" to get the latest; it installed, and I didn't see any errors in the terminal. I'm not sure what I'm doing wrong here. I'm not a complete beginner with Linux, so I can find my way around just fine, but I can't find a solution to this issue. Should I have done one of these two instead?

        sudo apt-get install nvidia-304
        sudo apt-get install nvidia-graphics-drivers-304

    Read the article

  • Open Source Bulletin Board with Facebook Group Integration

    - by Brian
    I'm working on an open-source community-oriented project which needs a highly social component where users can post discussion topics and questions and interact with each other. It would be ideal to facilitate discussion seamlessly between a bulletin board and Facebook. Has anyone seen such an integration? I'm talking about something that goes beyond a simple FB OAuth and actually synchronizes forum posts / topics / OAuth / comments. Pretty please, if a moderator is going to delete this, tell me which StackExchange forum is the appropriate place for posting such an inquiry. :)

    Read the article

  • Mobile Identity Management at SuperValu

    - by Tanu Sood
    While organizations are fast embracing BYOD (Bring Your Own Device) culture to attract and retain the best talent, improve productivity, bring agility and drive down costs, SuperValu coined their own term (and trend): TYDH – Take Your Device Home. Yes, SuperValu, a Minnesota-based food retailer with 18,000 employees, handed out 2,200 iPads to store directors at locations across the country. The motivation behind this reverse trend? Phillip Black, Director of Identity & Access Management at SuperValu, shared the reasoning behind it in his talk at last week’s Oracle OpenWorld 2012. "It gives them productivity tools to better manage their store," says Black. Intrigued? Find out more in this recently published news article. And learn more about Oracle Identity Management 11gR2 mobile- and social-ready sign-on features today.

    Additional Resources:
    Press Release: Oracle announces Identity Management 11g Release 2
    On-Demand webcast: Identity Management 11gR2 Launch
    Oracle Magazine: Security on the Move
    Website: Oracle Identity Management
    Blog Post: Mobile and Social Sign-on with Oracle Access Management

    Read the article

  • 3d vertex translated onto 2d viewport

    - by Dan Leidal
    I have a spherical world defined by simple trigonometric functions to create triangles that are relatively similar in size and shape throughout. What I want to be able to do is use mouse input to target a range of vertices in the area around the mouse click, in order to manipulate these vertices in real time. I read a post on this forum about translating 3d world coordinates into the 2d viewport. It recommended multiplying the world vector coordinates by the view matrix and then the projection, but it didn't include any code examples, and suffice it to say I couldn't get any good results. Further information: I am using a lookAt method for the view matrix. Does this cause a problem, and if so is there a solution? If this isn't the problem, does anyone have a simple code example illustrating how to translate one vertex in a 3d world into 2d view space? I am using XNA.
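
    For reference, my understanding of the usual chain is world space -> view (camera) space -> projection/clip space -> perspective divide -> viewport (pixel) coordinates. Here is a rough, hypothetical Python/numpy sketch of what I think that mapping looks like (not XNA code; the view and projection matrices are assumed to come from the engine, e.g. Matrix.CreateLookAt and Matrix.CreatePerspectiveFieldOfView), in case it clarifies what I'm after:

        import numpy as np

        def world_to_screen(world_pos, view, proj, viewport_w, viewport_h):
            """Map a 3D world-space point to 2D pixel coordinates.

            view and proj are 4x4 numpy arrays used here with column vectors
            (clip = proj @ view @ v). If your engine stores matrices for
            row-vector math, as XNA does, transpose them first.
            """
            v = np.array([world_pos[0], world_pos[1], world_pos[2], 1.0])
            clip = proj @ (view @ v)               # world -> view -> clip space
            if clip[3] == 0.0:
                return None                        # degenerate case: no meaningful projection
            ndc = clip[:3] / clip[3]               # perspective divide -> roughly [-1, 1]
            x = (ndc[0] + 1.0) * 0.5 * viewport_w  # NDC x -> pixels, left to right
            y = (1.0 - ndc[1]) * 0.5 * viewport_h  # NDC y -> pixels, top-down screen coords
            return x, y

        # To pick vertices near a mouse click, project each candidate vertex with
        # world_to_screen and keep those whose (x, y) lies within a pixel radius
        # of the click position.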

    Read the article

  • Opening images sorted by modification date/size/type/etc

    - by menino bolinho
    Suppose I have a folder with pictures in it. If I sort them by name, then once I open one with Image Viewer and navigate to the others, the order is respected. But if I sort my files by modification date, for example, I can't do that. Basically, the default Image Viewer only lets you navigate images by name. According to this post on the ubuntuforums, this has been an issue since 2007! Is there a good/easy way to fix it? Seems like such a trivial thing to me.

    Read the article

  • sysctl.conf ignore net settings

    - by Steffen Unland
    I have a little problem with sysctl on an Ubuntu 10.04 LTS system. When I set the sysctl values with "sysctl -w" everything works fine, but when I try to use the sysctl.conf file, the net settings are ignored. For example, my sysctl.conf:

        # /etc/sysctl.conf - Configuration file for setting system variables
        kernel.domainname=findme.sysctl

        # Corefiles information
        fs.suid_dumpable=2
        kernel.core_pattern=/cores/core-%e-%s-%u-%g-%p-%t

        ##############################################################3
        # Functions previously found in netbase
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait=1
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait=1

    When I grep for the values, I can see that the sysctl settings for net.ipv4.netfilter are not applied:

        [host:~ ] $ sysctl -a | grep domainname
        kernel.domainname = findme.sysctl
        [host:~ ] $ sysctl -a | grep "core_pattern"
        kernel.core_pattern = /cores/core-%e-%s-%u-%g-%p-%t
        [host:~ ] $ sysctl -a | grep "timeout_fin_wait"
        net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
        [host:~ ] $ sysctl -a | grep "timeout_close_wait"
        net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60

    Can somebody help me solve the problem? If you need more information I can post it. Cheers, Steffen

    Read the article
