Search Results

Search found 12829 results on 514 pages for 'crawl errors'.

Page 83/514 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration now offers a new way of monitoring: users can configure a dashboard and follow dynamic runtime changes. The dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can refresh the data manually in real time or set the dashboard to refresh automatically at set intervals.

    A dashboard shows the following information:

    - Status: the current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    - Messages Sent: the number of messages sent by the endpoint in the specified time period.
    - Messages Received: the number of messages received by the endpoint in the specified time period.
    - Errors: the number of messages with errors for the endpoint in the given time period.
    - Last Sent: the date and time the last message was sent from the endpoint.
    - Last Received: the date and time the last message was received by the endpoint.
    - Last Error: the date and time of the last error for the endpoint.

    The detailed view of a specific endpoint also shows the document type, the number of messages received per second, the total number of messages processed in the specified time period, and the average size of each message.

    Read the article

  • Terminal closing itself after 14.04 upgrade

    - by David
    All was fine in 12.04; in this case I'm running Ubuntu inside VirtualBox on Windows. In recent days the warning that my Ubuntu version was no longer supported was coming up pretty often, so yesterday I finally decided to upgrade. The upgrade process ran fine, no errors, no warnings. After rebooting, the errors started. Just after booting up there were some errors about video, GNOME, and video textures (sorry, I didn't pay attention at the time, so I don't remember them well). Luckily that went away after installing the VirtualBox additions. But the big problem is that I can't use the terminal. It opens fine when pressing Ctrl+Alt+T, but most commands cause it to close instantly. For example, df, ls, mv, cd... usually work, although it has closed a few times. But 'find' causes an instant close. 'apt-get update' kills it too, just after it fetches the package lists from the sources, when it starts processing them. I've tried xterm: everything works and I have none of those problems. I have tried reinstalling konsole, bash-static, and bash-completion, but nothing worked. I have no idea what to do, as there is no error message to search for the cause. It seems to be something related to bash, but that's all I know.
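    One quick check, offered only as a hedged suggestion and not part of the original post: start a shell that skips the bash startup files, since a broken line in ~/.bashrc or a bad PROMPT_COMMAND can make the shell exit right after certain commands, which the terminal window then reports by closing.

        # start bash without reading the per-user startup files
        bash --noprofile --norc
        # if find and apt-get update now behave, move the suspect file aside and retry
        mv ~/.bashrc ~/.bashrc.bak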

    Read the article

  • How to know why my cd is not booting?

    - by Tom Brito
    I have downloaded Ubuntu 10.10 and burned the ISO, but it will not boot. I ruled out problems with the ISO, as I downloaded it from the official website with no errors, and burned it with no errors. I ruled out problems with the burning, as it looks like it was recorded with no errors both here and later on another computer. I ruled out problems with my DVD reader, as other CDs boot fine. I'm currently using Ubuntu 9.10. I know I can upgrade via the internet, but I have this same problem with my Windows XP CD, so I really would like to find out what's going on here. My Ubuntu 9.10 CD boots just fine, but the new one does not. What else could it be? Or what more precise tests can I run to discover where the problem is? More info: what happens when I try to boot with the Ubuntu 10.10 CD is that it behaves as if there were no bootable CD in the drive. It just doesn't find the boot loader on the CD and starts the system from the hard disk. My notebook is an Amazon PC with an Intel Celeron 1.5, 2 GB of memory, a DVD-RW drive, and a 260 GB Samsung hard disk.
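    One more check worth adding, offered as a hedged sketch (the filename below is only an example; use the image you actually downloaded): compare the ISO against the checksums Ubuntu publishes next to the download before blaming the drive or the media.

        # verify the downloaded image; the exact filename is a placeholder
        md5sum ubuntu-10.10-desktop-i386.iso
        # compare the output against the MD5SUMS file published on the same download page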

    Read the article

  • Enterprise Instrumentation: The 'sessionName' parameter of value 'TraceSession' is not valid

    - by Michael Freidgeim
    We are still using Enterprise Instrumentation (created back in the .NET 1.1 days). In a new Server 2008 environment with IIS 7 we get the following errors:

        The 'sessionName' parameter of value 'TraceSession' is not valid. A trace session of this name does not exist in the TraceSessions configuration file for Windows Trace Session Manager service. Ensure that a session of this name exists in the TraceSessions configuration file and that the Windows Trace Session Manager service is started.
           at Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink..ctor(IDictionary parameters, EventSource eventSource)
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
           at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
           at Microsoft.EnterpriseInstrumentation.EventSinks.EventSink.CreateNewEventSinks(DataRow[] eventSinkRows, EventSource eventSource)

    I've seen the same errors on development Win7 machines when using IIS. It does not seem to be a problem under Cassini. I've checked that the Windows Trace Session Manager service has started and that the file C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\TraceSessions.config has the corresponding entry:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <defaultParameters minBuffers="4" maxFileSize="10" maxBuffers="25" bufferSize="20" logFileMode="sequential" flushTimer="3" />
            <sessionList>
                <session name="TraceSession" enabled="false" fileName="C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\Logs\TraceLog.log" />
            </sessionList>
        </configuration>

    The errors still continue, but I was able to disable the parameter in the eventSink configuration:

        <eventSink name="traceSink" description="Outputs events to the Windows Event Trace." type="Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink">
            <!-- MNF disabled parameter to avoid error "The 'sessionName' parameter of value 'TraceSession' is not valid"
                 <parameter name="sessionName" value="TraceSession" />
            -->
        </eventSink>

    Related old post: http://bytes.com/topic/net/answers/104761-enterprise-instrumentation-windows-trace-session-manager

    One day I wish to replace all Enterprise Instrumentation calls with NLog.

    Read the article

  • Error while installing Komparator4

    - by Lucio
    I downloaded the Komparator source from this page. The INSTALL file in the source says the following:

        Unpack komparator4-xxx.tar.bz2, and open a shell inside this directory
        mkdir build
        cd build
        cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ..
        make
        sudo make install

    I unpacked the file, made the directory and entered it, but when I try to run cmake (step 3) the terminal prints the following errors, preventing me from running make and install:

        CMake Error at /usr/share/cmake-2.8/Modules/FindKDE4.cmake:98 (MESSAGE):
          ERROR: cmake/modules/FindKDE4Internal.cmake not found in /home/lucio/.kde/share/apps;/usr/share/kde4/apps
        Call Stack (most recent call first):
          CMakeLists.txt:2 (find_package)

        CMake Warning (dev) in CMakeLists.txt:
          No cmake_minimum_required command is present. A line of code such as
          cmake_minimum_required(VERSION 2.8) should be added at the top of the file.
          The version specified may be lower if you wish to support older CMake
          versions for this project. For more information run "cmake --help-policy CMP0000".
          This warning is for project developers. Use -Wno-dev to suppress it.

        -- Configuring incomplete, errors occurred!

    What do these errors mean and how can I fix them?
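    The first error says CMake cannot find the KDE 4 CMake modules (FindKDE4Internal.cmake), which normally means the KDE 4 development files are not installed. A hedged sketch for Ubuntu follows; the package name is an assumption and may differ by release:

        # install the KDE 4 development files that ship FindKDE4Internal.cmake
        sudo apt-get install kdelibs5-dev
        # then re-run the configure step from the build directory
        cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ..

    The cmake_minimum_required warning is separate and harmless; it only asks the project's authors to declare a minimum CMake version.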

    Read the article

  • Why does my root filesystem keep becoming read-only?

    - by Scott Severance
    I've lately been having an issue with my root filesystem becoming read-only. It happens some amount of time after boot. I don't know exactly when it happens, as I don't usually notice it until something such as suspending the computer or printing fails. It seems to be fairly random. Since most of my system is on that partition, I can't re-mount it without rebooting. After this happens, the system runs a fsck. Sometimes it prompts to fix problems; other times it apparently finds none. To troubleshoot, I've searched through the logs but found nothing relevant. This might be due in part to not knowing when the actual errors took place. The filesystem is apparently good to begin with, as when fsck runs its fixes it doesn't report any errors. I've scanned the disk with SpinRite. A while ago, SpinRite found and recovered from some bad sectors on the hard drive. I ran a level 4 scan (a thorough scan) after this problem appeared, but SpinRite found nothing. The SMART data reports that the disk is OK with 63 bad sectors. The number of bad sectors hasn't changed recently. I realize that the disk isn't in the best of condition, and I have complete backups in case of catastrophic failure. Yet the lack of errors in the logs, combined with SpinRite's test results and the unchanged SMART data, makes me think that this problem has some cause other than disk failure. Other than disk failure, what could cause my symptoms?
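    A hedged diagnostic sketch (the device name below is only an example): ext3/ext4 is dropped to read-only by the kernel when it detects a filesystem error and the root entry in /etc/fstab carries errors=remount-ro (the Ubuntu default), so the kernel log from the moment it happens usually names the trigger, and its timestamps answer the "when" question.

        # look for the remount event and the I/O or journal error that triggered it
        dmesg | grep -iE 'remount|ext[34]|i/o error|journal'
        # check the error policy recorded in the superblock (replace sda1 with your root device)
        sudo tune2fs -l /dev/sda1 | grep -i 'errors behavior'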

    Read the article

  • Share home directory between Linux and Windows dual boot

    - by user877329
    This question is somewhat similar to "How to use Windows Share as home directory", but in this case Windows is not running. I have installed a dual-boot configuration with Ubuntu 12.04 and Windows. My Windows partition is mounted on /C. Now I want either Ubuntu to locate home directories in /C/Users, which is where the Windows accounts live, or Windows to use D:\home for home directories (D: being the Ubuntu root partition). For the first approach, I have managed to create a test user account:

        test-user:x:1004:1001:Test:/C/Users/test-user:/bin/bash

    The account works, but test-user cannot run any X session. From .xsession-errors:

        chmod: Changing rights on "/C/Users/test-user/.xsession-errors": Operation not permitted

    Would it help to get rid of that chmod, which has no effect anyway? If so, how? If I use the second approach, I need the Ext2fsd driver, which seems to work, but I am not sure whether Windows maps the Ext2 filesystem that early. Here is my fstab:

        proc                                       /proc  proc  nodev,noexec,nosuid        0  0
        UUID=e7cef061-ed8d-4a82-b708-0c8f4c6f297f  /      ext3  errors=remount-ro          0  1
        UUID=2CDCEB43DCEB0644                      /C     ntfs  defaults,umask=007,gid=46  0  0
        UUID=b087b5c0-b4bd-47e7-8d34-48ad9b192328  none   swap  sw                         0  0

    Update: I found something here: http://www.tuxera.com/community/ntfs-3g-advanced/ It will work if I do a correct mapping between NT users and Linux users.
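    As a hedged sketch of the first approach (assuming the /C mount is handled by ntfs-3g, which is what a current Ubuntu uses for ntfs entries): mounting the Windows partition owned by the Linux account, using the uid/gid from the test-user line above, puts /C/Users/test-user under that user's control without any chmod being needed.

        # /etc/fstab: give the NTFS partition to test-user (uid 1004, gid 1001)
        UUID=2CDCEB43DCEB0644  /C  ntfs-3g  defaults,uid=1004,gid=1001,umask=027  0  0

    chmod itself may still be refused or ignored unless a user mapping is configured, which is exactly what the tuxera page above describes.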

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train. The reasoning seems to invoke possible errors from small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there really be a problem with small round-off errors during deceleration for an emergency braking system? This kind of problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high-speed train? The concern about floating-point errors seems not to be well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states:

    During the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

    - The code contains three recursive functions (well, that one is obvious).
    - The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    - The code contains one linked list that uses dynamic memory allocation (the second obvious problem).
    - All inputs are checked to determine that they are within expected bounds before they are used.

    Read the article

  • Enabling GTX 570

    - by Silas
    Hello, I just built my new system:

    - Asus Rock Z77 Extreme 4
    - Intel i7 3770k
    - 16 GB Corsair RAM
    - Zotac Nvidia GTX 570
    - bequiet! 630W power supply
    - 120 GB SSD

    After that I installed Ubuntu 12.04 64-bit. It ran smoothly. I downloaded and installed all the recommended updates. After checking the system details, the GTX 570 didn't show up as the graphics unit, so I figured I needed to download the drivers. I did, but being a complete newbie to Linux I didn't succeed in installing them (I think). Anyway, after several tries and errors I shut down the PC and restarted it, resulting in no signal to my screen. After trying to reboot and all the monitor outputs with no result, I took out the graphics card, and now it boots normally, but after booting it says there is a problem with the system and the graphics can't be recognized.

    Question A: What do I do? I like Linux, but the arbitrariness of the errors that occur without any changes to the system scares me.
    Question B: Is there a beginner's guide to Ubuntu where I could start from scratch? Because I really want this to work.
    Question C: Now that the system is (suddenly) showing these graphics errors, so far without visible consequence despite the error message, should I reinstall the GPU and give the driver installation another try, or the other way around?

    I'll be very grateful for any help. Thank you in advance!
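    Not part of the original post, just a hedged pointer: on 12.04 the usual route for a GTX 570 is the packaged NVIDIA driver (via the Additional Drivers tool or apt) rather than the installer downloaded from nvidia.com; the package name below is the 12.04-era one and may differ on other releases.

        # install the packaged proprietary NVIDIA driver
        sudo apt-get update
        sudo apt-get install nvidia-current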

    Read the article

  • Install Problems on ASUS X401A Notebook

    - by tired_of_trying
    Okay... I tried many approaches to install Ubuntu 12.xxx on my new Asus notebook, with varying degrees of failure. First: I'm not a newbie, but I'm as frustrated as one!

    Install background:

    - Install from USB DVD drive: the install went well. I rebooted the machine, chose Ubuntu, and it errors out with an MBR file error (I can't remember the exact wording; something to do with a missing file). Choosing to boot W7 works fine.
    - Install from USB stick: couldn't get the machine to recognize the .iso (see the sketch at the end of this post).
    - Install into Oracle's VirtualBox: got the boot splash screen, then it hangs with a zillion errors. Note: I didn't have any problems installing Ubuntu in VirtualBox on my iMac, and it runs great there.
    - Install using Wubi: installed fine, but I get errors when booting Ubuntu (it doesn't find the needed Wubi files). I downloaded to the C: drive and tried installing from there; no luck.
    - For kicks: I tried running a Slax Linux .iso from a USB stick and it runs fine.

    Some questions:

    - Did I use the correct .iso? (I tried 12.04.0 and 12.04.1, both 32- and 64-bit versions. I simply downloaded them from the download link and didn't use or look for an alternate version.)
    - Do I need to do something special when burning the .iso to disc? What?

    I did read tons of posts, but no luck finding the solution. Any help is appreciated... thanks.
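    On the USB stick route: the .iso has to be written to the stick as a raw image (with Startup Disk Creator, or dd as sketched below), not copied onto it as a file, or the firmware will not see anything bootable. This is only a hedged sketch: the filename and device are placeholders, and it assumes the 12.04 desktop image is a hybrid ISO.

        # double-check the device name with lsblk first: this overwrites the whole stick
        sudo dd if=ubuntu-12.04.1-desktop-amd64.iso of=/dev/sdX bs=4M
        sync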

    Read the article

  • Warning and error information in stored procedures revisited

    - by user13334359
    The original way of handling warnings and errors in a MySQL stored routine was designed as follows:

    - if a warning was generated during stored routine execution and the routine had a handler for that warning/error, MySQL remembered the handler, ignored the warning and continued execution
    - after the routine was executed, MySQL checked whether there was a remembered handler and activated it, if any

    This logic was not ideal and caused several problems, particularly:

    - it was not possible to choose the right handler for an instruction that generated several warnings or errors, because only the first one was chosen
    - handling conditions in the current scope messed with conditions in a different one
    - there were no generated warnings/errors in the Diagnostic Area, which is against the SQL Standard.

    The first attempt to fix this was made in version 5.5. The patch left the Diagnostic Area intact after stored routine execution, but cleared it at the beginning of each statement that can generate warnings or that works with tables. The Diagnostic Area is checked after stored routine execution. This patch solved the issue with the order of condition handlers, but led to new issues. The most popular was that an outer stored routine could see warnings which should already have been handled by a handler inside an inner stored routine, even though the latter had a handler. I even had to write a blog post about it.

    And now I am happy to announce that this behaviour has changed a third time. Since version 5.6, the Diagnostic Area is cleared after an instruction leaves its handler. As a result, only one handler sees the condition it is supposed to process, and in the proper order. All past problems are solved. I am happy that my old blog post describing the weird behaviour in version 5.5 is no longer true.

    Read the article

  • What's the best way to handle numerous recurring log entries in game loop?

    - by Kaa
    I have a custom logging system, use of which is scattered all over the engine and game. The system is linked to a "LogStore" that has an std::vector<string> logs[NUM_LOG_TYPES] - each vector corresponds to its log type (info, error, debug, etc.). There's one extra std::vector that has "coordinates" to all log entries in the order they were received. Now, all the logging output is also displayed inside my development console in the game. The game console is handled by an HTML-type GUI and therefore requires a new <p> element to be added for each log output. My problem is that log entries generated in the main loop each frame freeze the engine, because they keep adding elements to the in-game console, and if the console or GUI itself generates a warning, that creates an infinite logging loop. I want to solve it by handling the recurring log entries in an elegant way that lets you know that something is critically wrong but won't freeze the engine - like displaying the count of errors in the last 60 frames instead of displaying the errors themselves. But how do you guys handle this? Does anyone know any nifty tricks for this? I understand the question may sound vague, but if someone has come across this type of issue I'm sure they would know exactly what's happening. Example problematic log entries:

    - OpenGL warnings (I actually do check for errors every frame in many places)
    - Really any prints anywhere in the main loop (may be debugging, may be warnings)

    Read the article

  • Crawling not working windows2008

    - by axtolf
    Hi, we installed a new MOSS 2007 farm in a Windows 2008 SP2 environment, using SQL 2008 as well. The configuration is 1 index, 1 front-end and 1 server with 2008, all on ESX 4.0. All the services that need it use a dedicated account, so search has a dedicated user. Installation went well and we found no problem. We installed MOSS from an SP1 ISO and afterwards upgraded WSS and MOSS to SP2. We installed the Italian language pack too and patched it to SP2. We created a new SSP. We created a web application and created a root website under it.

    The problem is that we can't make crawling work in any way. It seems that crawling is not able to reach the web application that we want to crawl. In the event viewer of the index server we have this error when we try to crawl it:

        The start address <h..p://name.domain.it:81 cannot be crawled.
        Context: Application 'SSP1', Catalog 'Portal_Content'
        Details: The object was not found. (0x80041201)

    The crawl log from the search administration only says:

        h..p://name.domani.it:81
        The object was not found. (The item was deleted because it was either not found or the crawler was denied access to it.)

    The domain is fully accessible from everywhere using both the farm admin account and the search account that the service runs under. The site is fully accessible from the index server and seems to have no problem. Inside the web application we created a root site collection with a couple of files. The farm log simply says... nothing! When we ask for a full crawl of the site, it runs for a second and then we get the errors that I wrote above, but the farm's log says nothing. Any suggestion or help is really appreciated, since we are losing a lot of time on this and really have no idea what's wrong with this farm.

    Read the article

  • SwingWorker in Java (beginner question)

    - by Malachi
    I am relatively new to multi-threading and want to execute a background task using a SwingWorker thread - the method that is called does not actually return anything, but I would like to be notified when it has completed. The code I have so far doesn't appear to be working:

        private void crawl(ActionEvent evt) {
            try {
                SwingWorker<Void, Void> crawler = new SwingWorker<Void, Void>() {
                    @Override
                    protected Void doInBackground() throws Exception {
                        Discoverer discover = new Discoverer();
                        discover.crawl();
                        return null;
                    }

                    @Override
                    protected void done() {
                        JOptionPane.showMessageDialog(jfThis, "Finished Crawling",
                                "Success", JOptionPane.INFORMATION_MESSAGE);
                    }
                };
                crawler.execute();
            } catch (Exception ex) {
                JOptionPane.showMessageDialog(this, ex.getMessage(), "Exception",
                        JOptionPane.ERROR_MESSAGE);
            }
        }

    Any feedback/advice would be greatly appreciated, as multi-threading is a big area of programming that I am weak in.

    Read the article

  • Can EC2 instances be set up to come from different IP ranges?

    - by Joshua Frank
    I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway. NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down. Edit: Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?
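    Setting the ethics question aside, a hedged note on the technical point (the endpoint below is the standard EC2 instance metadata service): each instance normally has its own public IPv4 address drawn from Amazon's address ranges, and you can check from inside a couple of test instances what address they present to the outside world before committing to the setup.

        # run on an EC2 instance: ask the metadata service for this instance's public IPv4
        curl -s http://169.254.169.254/latest/meta-data/public-ipv4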

    Read the article

  • Best way to store data for Greasemonkey based crawler?

    - by Björn
    I want to crawl a site with Greasemonkey and wonder if there is a better way to temporarily store values than with GM_setValue. What I want to do is crawl my contacts in a social network and extract the Twitter URLs from their profile pages. My current plan is to open each profile in its own tab, so that it looks more like a normal browsing person (i.e. CSS, scripts and images will be loaded by the browser), then store the Twitter URL with GM_setValue. Once all profile pages have been crawled, create a page using the stored values. I am not so happy with the storage option, though. Maybe there is a better way? I have considered inserting the user profiles into the current page so that I could process them all with the same script instance, but I am not sure whether XMLHttpRequest looks indistinguishable from normal user-initiated requests.

    Read the article

  • How do I add Objective C code to a FireBreath Project?

    - by jmort253
    I am writing a browser plugin for Mac OS that will place a status bar icon in the status bar, which users can use to interface with the browser plugin. I've successfully built a FireBreath 1.6 project in Xcode 4.4.1 and can install it in the browser. However, FireBreath uses C++, whereas a large majority of the existing libraries for Mac OS are written in Objective-C. In the /Mac/projectDef.make file, I added the Cocoa framework and the Foundation framework, as suggested here and in other resources I've found on the Internet:

        target_link_libraries(${PROJECT_NAME}
            ${PLUGIN_INTERNAL_DEPS}
            ${Cocoa.framework}      # added line
            ${Foundation.framework} # added line
            )

    I reran prepmac.sh, expecting a new project to be created in Xcode with my .mm and .m files; however, they seem to be ignored. I only see the .cpp and .h files. I added rules for those in the projectDef.make file, but it doesn't seem to make a difference:

        file (GLOB PLATFORM RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}
            Mac/[^.]*.cpp
            Mac/[^.]*.h
            Mac/[^.]*.m  #added by me
            Mac/[^.]*.mm #added by me
            Mac/[^.]*.cmake
            )

    Even if I add the files manually, I get a series of compilation errors. There are about 20 of them, all related to the file NSObjRuntime.h:

        Parse Issue - Expected unqualified-id
        Parse Issue - Unknown type name 'NSString'
        Semantic Issue - Use of undeclared identifier 'NSString'
        Parse Issue - Unknown type name 'NSString'
        ...
        Semantic Issue - Use of undeclared identifier 'aSelectorName'
        ...
        Semantic Issue - Use of undeclared identifier 'aClassName'
        ...

    It continues like this for some time with similar errors. From what I've read, these errors appear because of dependencies on the Foundation framework, which I believe I've included in the project. I also tried clicking the project in Xcode. I'm at the point now where I'm not sure what to try next. People say it's not hard to use Objective-C in C/C++ code, but being new to Xcode and Objective-C might contribute to my confusion. This is only day 4 for me on this new platform. What do I need to do to get Xcode to compile the Objective-C code? Please remember that I'm a little new to this, so I'd appreciate detailed answers as opposed to the vague one-liners that are common in the firebreath tag. I'm just a little in over my head, but if you can get me past this hurdle I'm certain I'll be good to go from there.

    UPDATE: I edited projects/MyPlugin/CMakeLists.txt and added the .m and .mm rules there too. After running prepmac.sh, the files are included in the project, but I still get the same compile errors. I moved all the .h and .mm files from the Objective-C code to the MyPlugin root folder and reran prepmac.sh. The problem still exists. Same compile errors.

    Read the article

  • Writing a PHP web crawler using cron

    - by Horse
    Hi all. I have written myself a web crawler using simplehtmldom and have got the crawl process working quite nicely. It crawls the start page, adds all links into a database table, sets a session pointer, and meta-refreshes the page to carry on to the next page. That keeps going until it runs out of links. That works fine, but obviously the crawl time for larger websites is pretty tedious. I want to be able to speed things up a bit, and possibly make it a cron job. Any ideas on making it as quick and efficient as possible, other than setting the memory limit / execution time higher?
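    Purely as a hedged sketch of the cron half (the paths are placeholders, and it assumes the crawler can be run from the PHP CLI instead of through the browser/meta-refresh loop): a crontab entry that processes one batch of queued links per run might look like this.

        # crontab -e: run one crawl batch every 15 minutes, appending output to a log
        */15 * * * * /usr/bin/php /path/to/crawler.php >> /var/log/crawler.log 2>&1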

    Read the article

  • guide on crawling the entire web ?

    - by bohohasdhfasdf
    I just had this thought and was wondering if it's possible to crawl the entire web (just like the big boys!) on a single dedicated server (like a Core2Duo, 8 GB RAM, 750 GB disk, 100 Mbps). I've come across a paper where this was done, but I cannot recall the paper's title. It was about crawling the entire web on a single dedicated server using some statistical model. Anyway, imagine starting with just around 10,000 seed URLs and doing an exhaustive crawl... is it possible? I need to crawl the web but am limited to a dedicated server. How can I do this? Is there an open source solution out there already? For example, see this real-time search engine: http://crawlrapidshare.com. The results are extremely good and freshly updated... how are they doing this?

    Read the article

  • Open Source PHP search engine

    - by Ravi Gupta
    I am looking for an open source search engine plugin written in PHP for my website (eCommerce). Before anybody answers, I have a doubt regarding the search engine. Usually search engines crawl web pages, create indexes and then use them when looking for content. But will the same model work for eCommerce websites? Yes, it can crawl product pages and index them, but don't you think it would be better if it crawled the database directly and indexed the products stored in the database? Then, when a user searches for any product, it would simply give us the rows of the table which match the user's query. Maybe what I am asking is a stupid question, but I am new to web development, so kindly help me to understand the concept. I have looked at a search engine called Sphider but didn't get what I have to do to make it work with an eCommerce website.

    Read the article

  • Registration form validation not validating

    - by jgray
    I am a noob when it comes to web development. I am trying to validate a registration form and to me it looks right but it will not validate.. This is what i have so far and i am validating through a repository or database. Any help would be greatly appreciated. thanks <?php session_start(); $title = "User Registration"; $keywords = "Name, contact, phone, e-mail, registration"; $description = "user registration becoming a member."; require "partials/_html_header.php"; //require "partials/_header.php"; require "partials/_menu.php"; require "DataRepository.php"; // if all validation passed save user $db = new DataRepository(); // form validation goes here $first_nameErr = $emailErr = $passwordErr = $passwordConfirmErr = ""; $first_name = $last_name = $email = $password = $passwordConfirm = ""; if(isset($_POST['submit'])) { $valid = TRUE; // check if all fields are valid { if ($_SERVER["REQUEST_METHOD"] == "POST") { if (empty($_POST["first_name"])) {$first_nameErr = "Name is required";} else { // $first_name = test_input($_POST["first_name"]); // check if name only contains letters and whitespace if (!preg_match("/^[a-zA-Z ]*$/",$first_name)) { $first_nameErr = "Only letters and white space allowed"; } } if (empty($_POST["email"])) {$emailErr = "Email is required";} else { // $email = test_input($_POST["email"]); // check if e-mail address syntax is valid if (!preg_match("/([\w\-]+\@[\w\-]+\.[\w\-]+)/",$email)) { $emailErr = "Invalid email format"; } } if (!preg_match("/(......)/",$password)) { $passwordErr = "Subject must contain THREE or more characters!"; } if ($_POST['password']!= $_POST['passwordConfirm']) { echo("Oops! Password did not match! Try again. "); } function test_input($data) { $data = trim($data); $data = stripslashes($data); $data = htmlspecialchars($data); return $data; } } } if(!$db->isEmailUnique($_POST['email'])) { $valid = FALSE; //display errors in the correct places } // if still valid save the user if($valid) { $new_user = array( 'first_name' => $_POST['first_name'], 'last_name' => $_POST['last_name'], 'email' => $_POST['email'], 'password' => $_POST['password'] ); $results = $db->saveUser($new_user); if($results == TRUE) { header("Location: login.php"); } else { echo "WTF!"; exit; } } } ?> <head> <style> .error {color: #FF0000;} </style> </head> <h1 class="center"> World Wide Web Creations' User Registration </h1> <p><span class="error"></span><p> <form method="POST" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>" onsubmit="return validate_form()" > First Name: <input type="text" name="first_name" id="first_name" value="<?php echo $first_name;?>" /> <span class="error"> <?php echo $first_nameErr;?></span> <br /> <br /> Last Name(Optional): <input type="text" name="last_name" id="last_name" value="<?php echo $last_name;?>" /> <br /> <br /> E-mail: <input type="email" name="email" id="email" value="<?php echo $email;?>" /> <span class="error"> <?php echo $emailErr;?></span> <br /> <br /> Password: <input type="password" name="password" id="password" value="" /> <span class="error"> <?php echo $passwordErr;?></span> <br /> <br /> Confirmation Password: <input type="password" name="passwordConfirm" id="passwordConfirm" value="" /> <span class="error"> <?php echo $passwordConfirmErr;?></span> <br /> <br /> <br /> <br /> <input type="submit" name="submit" id="submit" value="Submit Data" /> <input type="reset" name="reset" id="reset" value="Reset Form" /> </form> </body> </html> <?php require "partials/_footer.php"; require "partials/_html_footer.php"; ?> 
class DataRepository { // version number private $version = "1.0.3"; // turn on and off debugging private static $debug = FALSE; // flag to (re)initialize db on each call private static $initialize_db = FALSE; // insert test data on initialization private static $load_default_data = TRUE; const DATAFILE = "203data.txt"; private $data = NULL; private $errors = array(); private $user_fields = array( 'id' => array('required' => 0), 'created_at' => array('required' => 0), 'updated_at' => array('required' => 0), 'first_name' => array('required' => 1), 'last_name' => array('required' => 0), 'email' => array('required' => 1), 'password' => array('required' => 1), 'level' => array('required' => 0, 'default' => 2), ); private $post_fields = array( 'id' => array('required' => 0), 'created_at' => array('required' => 0), 'updated_at' => array('required' => 0), 'user_id' => array('required' => 1), 'title' => array('required' => 1), 'message' => array('required' => 1), 'private' => array('required' => 0, 'default' => 0), ); private $default_user = array( 'id' => 1, 'created_at' => '2013-01-01 00:00:00', 'updated_at' => '2013-01-01 00:00:00', 'first_name' => 'Admin Joe', 'last_name' => 'Tester', 'email' => '[email protected]', 'password' => 'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3', 'level' => 1, ); private $default_post = array( 'id' => 1, 'created_at' => '2013-01-01 00:00:00', 'updated_at' => '2013-01-01 00:00:00', 'user_id' => 1, 'title' => 'My First Post', 'message' => 'This is the message of the first post.', 'private' => 0, ); // constructor will load existing data into memory // if it does not exist it will create it and initialize if desired public function __construct() { // check if need to reset if(DataRepository::$initialize_db AND file_exists(DataRepository::DATAFILE)) { unlink(DataRepository::DATAFILE); } // if file doesn't exist, create the initial datafile if(!file_exists(DataRepository::DATAFILE)) { $this->log("Data file does not exist. Attempting to create it... (".__FUNCTION__.":".__LINE__.")"); // create initial file $this->data = array( 'users' => array( ), 'posts' => array() ); // load default data if needed if(DataRepository::$load_default_data) { $this->data['users'][1] = $this->default_user; $this->data['posts'][1] = $this->default_post; } $this->writeTheData(); } // load the data into memory for use $this->loadTheData(); } private function showErrors($break = TRUE, $type = NULL) { if(count($this->errors) > 0) { echo "<div style=\"color:red;font-weight: bold;font-size: 1.3em\":<h3>$type Errors</h3><ol>"; foreach($this->errors AS $error) { echo "<li>$error</li>"; } echo "</ol></div>"; if($break) { "</br></br></br>Exiting because of errors!"; exit; } } } private function writeTheData() { $this->log("Attempting to write the datafile: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); file_put_contents(DataRepository::DATAFILE, json_encode($this->data)); $this->log("Datafile written: ".DataRepository::DATAFILE." (line: ".__LINE__.")"); } private function loadTheData() { $this->log("Attempting to load the datafile: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); $this->data = json_decode(file_get_contents(DataRepository::DATAFILE), true); $this->log("Datafile loaded: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")", $this->data); } private function validateFields(&$info, $fields, $pre_errors = NULL) { // merge in any pre_errors if($pre_errors != NULL) { $this->errors = array_merge($this->errors, $pre_errors); } // check all required fields foreach($fields AS $field => $reqs) { if(isset($reqs['required']) AND $reqs['required'] == 1) { if(!isset($info[$field]) OR strlen($info[$field]) == 0) { $this->errors[] = "$field is a REQUIRED field"; } } // set any default values if not present if(isset($reqs['default']) AND (!isset($info[$field]) OR $info[$field] == "")) { $info[$field] = $reqs['default']; } } $this->showErrors(); if(count($this->errors) == 0) { return TRUE; } else { return FALSE; } } private function validateUser(&$user_info) { // check if the email is already in use $this->log("About to check pre_errors: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $user_info); $pre_errors = NULL; if(isset($user_info['email'])) { if(!$this->isEmailUnique($user_info['email'])) { $pre_errors = array('The email: '.$user_info['email'].' is already used in our system'); } } $this->log("After pre_error check: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $pre_errors); return $this->validateFields($user_info, $this->user_fields, $pre_errors); } private function validatePost(&$post_info) { // check if the user_id in the post actually exists $this->log("About to check pre_errors: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $post_info); $pre_errors = NULL; if(isset($post_info['user_id'])) { if(!isset($this->data['users'][$post_info['user_id']])) { $pre_errors = array('The posts must belong to a valid user. (User '.$post_info['user_id'].' does not exist in the data'); } } $this->log("After pre_error check: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $pre_errors); return $this->validateFields($post_info, $this->post_fields, $pre_errors); } private function log($message, $data = NULL) { $style = "background-color: #F8F8F8; border: 1px solid #DDDDDD; border-radius: 3px; font-size: 13px; line-height: 19px; overflow: auto; padding: 6px 10px;"; if(DataRepository::$debug) { if($data != NULL) { $dump = "<div style=\"$style\"><pre>".json_encode($data, JSON_PRETTY_PRINT)."</pre></div>"; } else { $dump = NULL; } echo "<code><b>Debug:</b> $message</code>$dump<br />"; } } public function saveUser($user_info) { $this->log("Entering saveUser: (".__FUNCTION__.":".__LINE__.")", $user_info); $mydata = array(); $update = FALSE; // check for existing data if(isset($user_info['id']) AND $this->data['users'][$user_info['id']]) { $mydata = $this->data['users'][$user_info['id']]; $this->log("Loaded prior user: ".print_r($mydata, TRUE)." (".__FUNCTION__.":".__LINE__.")"); } // copy over existing values $this->log("Before copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); foreach($user_info AS $k => $v) { $mydata[$k] = $user_info[$k]; } $this->log("After copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); // check required fields if($this->validateUser($mydata)) { // hash password if new if(isset($mydata['password'])) { $mydata['password'] = sha1($mydata['password']); } // if no id, add the next available one if(!isset($mydata['id']) OR (int)$mydata['id'] < 1) { $this->log("No id set: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); if(count($this->data['users']) == 0) { $mydata['id'] = 1; $this->log("Setting id to 1: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")"); } else { $mydata['id'] = max(array_keys($this->data['users']))+1; $this->log("Found max id and added 1 [".$mydata['id']."]: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); } } // set created date if null if(!isset($mydata['created_at'])) { $mydata['created_at'] = date ("Y-m-d H:i:s", time()); } // update modified time $mydata['modified_at'] = date ("Y-m-d H:i:s", time()); // copy into data and save $this->log("Before data save: (".__FUNCTION__.":".__LINE__.")", $this->data); $this->data['users'][$mydata['id']] = $mydata; $this->writeTheData(); } return TRUE; } public function getUserById($id) { if(isset($this->data['users'][$id])) { return $this->data['users'][$id]; } else { return array(); } } public function isEmailUnique($email) { // find the user that has the right username/password foreach($this->data['users'] AS $k => $v) { $this->log("Checking unique email: {$v['email']} == $email (".__FUNCTION__.":".__LINE__.")", NULL); if($v['email'] == $email) { $this->log("FOUND NOT unique email: {$v['email']} == $email (".__FUNCTION__.":".__LINE__.")", NULL); return FALSE; break; } } $this->log("Email IS unique: $email (".__FUNCTION__.":".__LINE__.")", NULL); return TRUE; } public function login($username, $password) { // hash password for validation $password = sha1($password); $this->log("Attempting to login with $username / $password: (".__FUNCTION__.":".__LINE__.")", NULL); $user = NULL; // find the user that has the right username/password foreach($this->data['users'] AS $k => $v) { if($v['email'] == $username AND $v['password'] == $password) { $user = $v; break; } } $this->log("Exiting login: (".__FUNCTION__.":".__LINE__.")", $user); return $user; } public function savePost($post_info) { $this->log("Entering savePost: (".__FUNCTION__.":".__LINE__.")", $post_info); $mydata = array(); // check for existing data if(isset($post_info['id']) AND $this->data['posts'][$post_info['id']]) { $mydata = $this->data['posts'][$post_info['id']]; $this->log("Loaded prior posts: ".print_r($mydata, TRUE)." (".__FUNCTION__.":".__LINE__.")"); } $this->log("Before copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); foreach($post_info AS $k => $v) { $mydata[$k] = $post_info[$k]; } $this->log("After copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); // check required fields if($this->validatePost($mydata)) { // if no id, add the next available one if(!isset($mydata['id']) OR (int)$mydata['id'] < 1) { $this->log("No id set: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); if(count($this->data['posts']) == 0) { $mydata['id'] = 1; $this->log("Setting id to 1: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); } else { $mydata['id'] = max(array_keys($this->data['posts']))+1; $this->log("Found max id and added 1 [".$mydata['id']."]: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")"); } } // set created date if null if(!isset($mydata['created_at'])) { $mydata['created_at'] = date ("Y-m-d H:i:s", time()); } // update modified time $mydata['modified_at'] = date ("Y-m-d H:i:s", time()); // copy into data and save $this->data['posts'][$mydata['id']] = $mydata; $this->log("Before data save: (".__FUNCTION__.":".__LINE__.")", $this->data); $this->writeTheData(); } return TRUE; } public function getAllPosts() { return $this->loadPostsUsers($this->data['posts']); } public function loadPostsUsers($posts) { foreach($posts AS $id => $post) { $posts[$id]['user'] = $this->getUserById($post['user_id']); } return $posts; } public function dump($line_number, $temp = 'NO') { // if(DataRepository::$debug) { if($temp == 'NO') { $temp = $this->data; } echo "<pre>Dumping from line: $line_number\n"; echo json_encode($temp, JSON_PRETTY_PRINT); echo "</pre>"; } } } /* * Change Log * * 1.0.0 * - first version * 1.0.1 * - Added isEmailUnique() function for form validation and precheck on user save * 1.0.2 * - Fixed getAllPosts() to include the post's user info * - Added loadPostsUsers() to load one or more posts with their user info * 1.0.3 * - Added autoload to always add admin Joe. */

    Read the article

  • Permissions restoring from Time Machine - Finder copy vs "cp" copy

    - by Ben Challenor
    Note: this question was starting to sprawl so I rewrote it. I have a folder that I'm trying to restore from a Time Machine backup. Using cp -R works fine, but certain folders cannot be restored with either the Time Machine UI or Finder. Other users have reported similar errors and the cp -R workaround was suggested (e.g. Restoring from Time Machine - Permissions Error). But I wanted to understand: Why cp -R works when the Finder and the Time Machine UI do not. Whether I could prevent the errors by changing file permissions before the backup. There do indeed seem to be some permissions that Finder works with and some that it does not. I've narrowed the errors down to folders with the user ben (that's me) and the group wheel. Here's a simplified reproduction. I have four folders with the owner/group combinations I've seen so far: ben ~/Desktop/test $ ls -lea total 16 drwxr-xr-x 7 ben staff 238 27 Nov 14:31 . drwx------+ 17 ben staff 578 27 Nov 14:29 .. 0: group:everyone deny delete -rw-r--r--@ 1 ben staff 6148 27 Nov 14:31 .DS_Store drwxr-xr-x 3 ben staff 102 27 Nov 14:30 ben-staff drwxr-xr-x 3 ben wheel 102 27 Nov 14:30 ben-wheel drwxr-xr-x 3 root admin 102 27 Nov 14:31 root-admin drwxr-xr-x 3 root wheel 102 27 Nov 14:31 root-wheel Each contains a single file called file with the same owner/group: ben ~/Desktop/test $ cd ben-staff ben ~/Desktop/test/ben-staff $ ls -lea total 0 drwxr-xr-x 3 ben staff 102 27 Nov 14:30 . drwxr-xr-x 7 ben staff 238 27 Nov 14:31 .. -rw-r--r-- 1 ben staff 0 27 Nov 14:30 file In the backup, they look like this: ben /Volumes/Deimos/Backups.backupdb/Ben’s MacBook Air/Latest/Macintosh HD/Users/ben/Desktop/test $ ls -leA total 16 -rw-r--r--@ 1 ben staff 6148 27 Nov 14:34 .DS_Store 0: group:everyone deny write,delete,append,writeattr,writeextattr,chown drwxr-xr-x@ 3 ben staff 102 27 Nov 14:51 ben-staff 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown drwxr-xr-x@ 3 ben wheel 102 27 Nov 14:51 ben-wheel 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown drwxr-xr-x@ 3 root admin 102 27 Nov 14:52 root-admin 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown drwxr-xr-x@ 3 root wheel 102 27 Nov 14:52 root-wheel 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown Of these, ben-staff can be restored with Finder without errors. root-wheel and root-admin ask for my password and then restore without errors. But ben-wheel does not prompt for my password and gives the error: The operation can’t be completed because you don’t have permission to access “file”. Interestingly, I can restore the file from this folder by dragging it directly to my local drive (instead of dragging its parent folder), but when I do so its permissions are changed to ben/staff. Here are the permissions after the restore for the three folders that worked correctly, and the file from ben-wheel that was changed to ben/staff. ben ~/Desktop/test-restore $ ls -leA total 16 -rw-r--r--@ 1 ben staff 6148 27 Nov 14:46 .DS_Store drwxr-xr-x 3 ben staff 102 27 Nov 14:30 ben-staff -rw-r--r-- 1 ben staff 0 27 Nov 14:30 file drwxr-xr-x 3 root admin 102 27 Nov 14:31 root-admin drwxr-xr-x 3 root wheel 102 27 Nov 14:31 root-wheel Can anyone explain this behaviour? Why do Finder and the Time Machine UI break with the ben / wheel permissions? And why does cp -R work (even without sudo)?
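    On the second question (preventing the errors before the backup), a hedged sketch grounded only in the listings above: the folders that misbehave are the ones owned by ben with group wheel, so switching their group to staff before the next backup should make them restorable the same way ben-staff is. The path below is just the test folder from the example.

        # change the group of the problem folder from wheel to staff before backing up
        sudo chgrp -R staff ~/Desktop/test/ben-wheel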

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
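    As a hedged sketch of the rsync-with-per-file-retries idea described above (the mount points and list file are placeholders, and it relies on GNU coreutils' timeout being available): copy file by file with a hard time limit, record anything that stalls or errors, and feed that list back in on a later pass with a longer limit or more attempts.

        # build the list of files to attempt, relative to the source mount
        ( cd /mnt/dying && find . -type f ) > file-list.txt

        # first pass: 5-minute cap per file; stalled or failed files are recorded for retries
        while IFS= read -r f; do
            mkdir -p "/mnt/rescued/$(dirname "$f")"
            timeout 300 rsync -aP --inplace "/mnt/dying/$f" "/mnt/rescued/$f" \
                || printf '%s\n' "$f" >> failed-files.txt
        done < file-list.txt

        # later passes: read failed-files.txt instead, with a larger timeout value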

    Read the article

  • Can't configure frame relay T1 on Cisco 1760

    - by sonar
    For the past few days I've been trying to configure a data T1 via a Frame Relay. Now I've been pretty unsuccessful at it, and it's been a while, since I've done this so please bare with me. The ISP provided me the following information: 1. IP address 2. Gateway address 3. Encapsulation Frame Relay 4. DLCI 100 5. BZ8 ESF (I think the bz8 was supposed to be b8zs) 6. Time Slot (1 al 24). And what I have configured up until now is the following: interface Serial0/0 ip address <ip address> 255.255.255.252 encapsulation frame-relay service-module t1 timeslots 1-24 frame-relay interface-dlci 100 sh service-module s0/0 (outputs): Module type is T1/fractional Hardware revision is 0.128, Software revision is 0.2, Image checksum is 0x73D70058, Protocol revision is 0.1 Receiver has no alarms. Framing is **ESF**, Line Code is **B8ZS**, Current clock source is line, Fraction has **24 timeslots** (64 Kbits/sec each), Net bandwidth is 1536 Kbits/sec. Last module self-test (done at startup): Passed Last clearing of alarm counters 00:17:17 loss of signal : 0, loss of frame : 0, AIS alarm : 0, Remote alarm : 2, last occurred 00:10:10 Module access errors : 0, Total Data (last 1 15 minute intervals): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs Data in current interval (138 seconds elapsed): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs sh int: FastEthernet0/0 is up, line protocol is up Hardware is PQUICC_FEC, address is 000d.6516.e5aa (bia 000d.6516.e5aa) Internet address is 10.0.0.1/24 MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, 100BaseTX/FX ARP type: ARPA, ARP Timeout 04:00:00 Last input 00:20:00, output 00:00:00, output hang never Last clearing of "show interface" counters never Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 0 packets input, 0 bytes Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored 0 watchdog 0 input packets with dribble condition detected 191 packets output, 20676 bytes, 0 underruns 0 output errors, 0 collisions, 1 interface resets 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out Serial0/0 is up, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY, loopback not set Keepalive set (10 sec) LMI enq sent 157, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down LMI enq recvd 23, LMI stat sent 0, LMI upd sent 0 LMI DLCI 1023 LMI type is CISCO frame relay DTE FR SVC disabled, LAPF state down Broadcast queue 0/64, broadcasts sent/dropped 2/0, interface broadcasts 0 Last input 00:24:51, output 00:00:05, output hang never Last clearing of "show interface" counters 00:27:20 Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: weighted fair Output queue: 0/1000/64/0 (size/max total/threshold/drops) Conversations 0/1/256 (active/max 
active/max total) Reserved Conversations 0/0 (allocated/max allocated) Available Bandwidth 1152 kilobits/sec 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 23 packets input, 302 bytes, 0 no buffer Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 1725 input errors, 595 CRC, 1099 frame, 0 overrun, 0 ignored, 30 abort 246 packets output, 3974 bytes, 0 underruns 0 output errors, 0 collisions, 48 interface resets 0 output buffer failures, 0 output buffers swapped out 4 carrier transitions DCD=up DSR=up DTR=up RTS=up CTS=up Serial0/0.1 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never Serial0/0.100 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU Internet address is <ip address>/30 MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never And everything seems to be accounted for to me, but apparently I'm missing something. My issue is that I'm stuck on interface up, line protocol down, so the T1 doesn't go up. Any ideas? Thank you,

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >