Search Results

Search found 6399 results on 256 pages for 'a record'.

  • Extracting a line section of mysql backup using sed

    - by carpii
    I occasionally need to extract a single record from a mysql backup. To do this, I first extract the single table I want from the backup: sed -n -e '/CREATE TABLE.*usertext/,/CREATE TABLE/p' 20120930_backup.sql > table.sql. In table.sql, the records are batched using extended inserts (with maybe 100 records per insert before a new line starting with INSERT INTO begins), so they look like: INSERT INTO usertext VALUES (1, field2 etc), (2, field2 etc), ... INSERT INTO usertext VALUES (101, field2 etc), (102, field2 etc), ... I'm trying to extract record 239560 from this, using: sed -n -e '/(239560.*/,/)/p' table.sql > record.sql. I.e., start streaming when it finds 239560, and stop when it hits the closing bracket. But this isn't working as I hoped; it just results in the full insert batch being output. Can someone give me some pointers as to where I'm going wrong? Would I be better off using awk for extracting segments of lines, and sed for extracting lines within a file?
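
    A minimal sketch of one alternative: sed's /start/,/end/ ranges match whole lines, which is why the full batch comes out. A substitution that captures just the parenthesised group is closer to what's wanted, assuming the record's fields contain no ')' characters of their own:

        # Capture the single parenthesised record instead of a line range.
        # Assumption: no literal ')' inside the record's string fields.
        sed -n 's/.*\((239560,[^)]*)\).*/\1/p' table.sql > record.sql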

  • Why does Postfix deliver mails locally instead of relaying them to Google Apps?

    - by user40388
    I get the following error trying to send an email to my Google Apps account at [email protected] from my Postfix server: to=, relay=local, delay=0.09, delays=0.07/0/0/0.02, dsn=5.1.1, status=bounced (unknown user: "admin"). Is there a way I can force it not to use the LOCAL relay, and to treat [email protected] as outside email rather than looking for a user in the current Postfix configuration? I am trying to email the full address "[email protected]", not only "admin". I have the Google Apps MX record on mydomain.com plus an SPF record, which before was: v=spf1 include:_spf.google.com ~all (emailing [email protected] used to work with that record), but I had to change it to v=spf1 a mx ip4:MY.IP.HERE include:_spf.google.com ~all
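
    For what it's worth, the usual cause of "relay=local ... unknown user" is that mydomain.com appears in Postfix's mydestination, which makes Postfix claim local delivery for the whole domain. A minimal sketch of the change, assuming a stock main.cf:

        # /etc/postfix/main.cf -- make sure mydomain.com is NOT listed here,
        # so mail for it follows MX records instead of local delivery
        mydestination = $myhostname, localhost.$mydomain, localhost
        # then apply: postfix reload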

  • What name server should my domain use to point to a site hosted on GitHub Pages?

    - by nournia
    I've got a domain name and I want to set it up for my site hosted on the GitHub Pages service. The documentation mentions that if you are using a top-level domain like example.com, you must use an A record pointing to 204.232.175.78. But my domain registrar doesn't let me add a DNS record; it only asks me to fill in a table like this:
    Name Server (NS Record)
    Server Name        | Server IP
    ns53.parsihost.com | 94.232.173.52
    ns2.parsihost.com  | 206.223.171.254
    I asked the registrar about this problem and they told me "You must put Github's name server in those cells". So, what is the mapping from this table to DNS records, and what is your advice for filling in this kind of table?
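
    For context, a hedged reading: the registrar's table sets NS records (which name servers answer for the domain), while the A record from the GitHub docs lives inside the zone those name servers serve. GitHub Pages does not operate public name servers to put in those cells, so the usual route is a DNS host that allows custom records, where the entries would look roughly like this in a standard zone file ('username' is a placeholder):

        example.com.   IN  A      204.232.175.78
        www            IN  CNAME  username.github.io.   ; hypothetical subdomain mapping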

  • Incorporating Sound in UPK 3.6.1

    - by [email protected]
    UPK 3.6.1 now offers developers the ability to easily record and edit sound from right within the UPK Developer. Sound can be recorded in either the concept pane or individual topic frames. A developer can record sound while capturing a transaction, or add sound after recording on a frame-by-frame basis. The sound editor in UPK 3.6.1 allows developers to perform a variety of editing functions: play, insert sound or silence, delete, adjust amplification, and import or export sound files, to name a few. Internally, Oracle Product Management is using this functionality to create "UPK-casts" for enablement purposes. We do this by capturing PowerPoint slides, then adding sound, allowing us to create our own recorded "webcasts". Because we develop these independently, we control the content and have more flexibility to edit it as needed. Whether it's a change to a single frame or an entire topic, we can react quickly, providing our users with the most up-to-date information. And you don't need expensive equipment or a sound studio to achieve good sound quality. Depending on how your end users are accessing your content, a $35 headset can do the trick. Just be sure to follow the best practices for sound recording as outlined in the UPK documentation. Tip: we've found that we get the best results with sound consistency when we record all the sound for a topic in one sitting. UPK 3.6.1 is now available for download from Oracle E-Delivery. Upgrade today and have fun creating more robust, engaging content for your users! - Karen Rihs, Oracle UPK & Tutor Outbound Product Management

  • Felix Baumgartner Skydives from the Edge of Space [Video]

    - by Jason Fitzpatrick
    Yesterday Felix Baumgartner broke the record for the highest skydive by leaping out of a capsule 128,100 feet above the Earth. Check out his jump in the following videos. After flying to an altitude of 39,045 meters (128,100 feet) in a helium-filled balloon, Felix Baumgartner completed a record-breaking jump for the ages from the edge of space, exactly 65 years after Chuck Yeager first broke the sound barrier flying in an experimental rocket-powered airplane. Felix reached a maximum speed of 1,342.8 km/h (833 mph) through the near vacuum of the stratosphere before being slowed by the atmosphere later during his 4:20-minute-long freefall. The 43-year-old Austrian skydiving expert also broke two other world records (highest freefall, highest manned balloon flight), leaving the one for the longest freefall to project mentor Col. Joe Kittinger. The above video is a 2-minute highlight reel of the ascent and jump; check out the full 15-minute descent video here. For an in-depth look at the technology used to keep Baumgartner safe during his record-setting journey, hit up the link below. The Tech Behind Felix Baumgartner’s Stratospheric Skydive [ExtremeTech]

  • Sending Emails via Google SMTP quit working after some time

    - by Chris
    On a website I use PHPMailer to send automated registration emails, etc., and also a newsletter tool (which loops through the emails and sends them one by one). I also configured and confirmed @mydomain addresses under Gmail's Settings, so I can send from @mydomain addresses without the Gmail address being displayed. Furthermore, I authorized the website to send mails with this link: https://accounts.google.com/DisplayUnlockCaptcha Now, after 2 months where everything worked perfectly fine, users suddenly stopped receiving emails, and most recently emails are not even being sent anymore. I also received many error messages like this: Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.4.1 [email protected]: Recipient address rejected: Access Denied (state 13). When I check at this link: https://toolbox.googleapps.com/apps/checkmx/ it reports 2 non-critical errors: Relayhost configuration detected. There SHOULD be a valid SPF record. So, my questions are: does anybody have any hint why it stopped working, and what do the error messages mean? What can I do to fix it? Where do I set an SPF record (cPanel?)? What is a relayhost and how do I fix that? It is about 1000-1400 mails a day (Gmail's limit is 2000). Also, what can I do wrong when setting up an SPF record? I've heard there are some testing tools for that. Thank you so much in advance for your help!
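
    On the SPF question specifically: SPF is published as a TXT record in the domain's DNS zone (in cPanel, under the Zone Editor / Advanced DNS Zone Editor). A sketch of a record that covers both Google's servers and the site's own sending IP, with the IP as a placeholder:

        mydomain.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.10 include:_spf.google.com ~all"
        ; 203.0.113.10 is a documentation placeholder -- substitute the server's real IP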

  • SSIS ForEachLoop Container

    - by Leonard Mwangi
    I recently had a client request to create an SSIS package that would loop through a set of data in SQL tables to allow them to complete their data transformation processes. Knowing that Integration Services has a ForEach Loop Container, I knew the task would be easy, but the moment I jumped into it I found there was no straightforward way to accomplish it, since ForEach doesn't really have a "loop through a table" enumerator. With the capabilities of Integration Services I was still confident it was possible; it was just a matter of creativity to get it done. I set out to discover what the different ForEach Loop Editor enumerators did and settled on the Variable Enumerator. Here is how I accomplished the task (a sketch of the SQL for steps 4 and 5 follows the list).
    1. Drop your ForEach Loop Container in your work area.
    2. Create a few SSIS variables that will contain the data. Notice I have assigned the MyID_ID variable a value of "TEST", which is not evaluated either. This variable will be assigned data from the database, hence allowing us to loop.
    3. In the ForEach Loop Editor's Collection, select Variable Enumerator.
    4. Once this is all set, we need a mechanism to grab the data from the SQL table and assign it to the variable.
       Fig: Select Top 1 record
       Fig: Assign Top 1 record to the variable
    5. Now all that's required is a housekeeping process that updates the table you are looping over, so that you can grab the next record.
    Fig: A look at the complete package
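
    A hedged sketch of the two Execute SQL statements that steps 4 and 5 describe, using a hypothetical work table (the post's screenshots are not reproduced here); the SELECT's single-row result is mapped to the SSIS variable, and the UPDATE marks the row processed so the next iteration grabs a fresh one:

        -- Step 4: grab the next unprocessed row (dbo.WorkQueue is hypothetical)
        SELECT TOP 1 ID
        FROM dbo.WorkQueue
        WHERE Processed = 0
        ORDER BY ID;

        -- Step 5: housekeeping, so the loop advances
        UPDATE dbo.WorkQueue
        SET Processed = 1
        WHERE ID = ?;   -- parameter mapped from the MyID_ID variable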

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables – you are often faced with many thousands, if not millions, of rows. Most people are happy looking at the first few rows, but occasionally you need to see more. SQL Developer doesn’t show you all records at once. Instead, it brings the records down in ‘chunks,’ or as needed. How It Works: There is a preference that tells SQL Developer how many records to get in a single request, or ‘fetch,’ of records. The default is 50. So if I run a query that returns MORE than 50 rows: there are more than 50 records in this result set, but we have 50 in the grid to start with. We don’t actually know how many records are in this result set. To show the record count here, we physically query the database with a row-count type query. All we know is that the query has finished executing, and that there are rows available to fetch. It tells us when it’s done. As you scroll through the grid, if you get to record 50 and scroll more, we’ll fetch 50 more records. Or, you can cheat to get to the ‘bottom’ of the result set: you can ask SQL Developer to just get all the records at once. Once all the records have been fetched, you’ll see this: All rows fetched! A word of caution: there’s a reason we have the default set to 50 and not 1000. Bringing back data can get expensive and heavy. We’ve found the best performance in the 50 to 200 record range.
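
    As an aside (my assumption about the mechanism, not something the post states): this chunking mirrors the fetch size that any JDBC client of Oracle's driver can set, which is a reasonable way to picture what the preference controls:

        // Sketch: incremental fetching in plain JDBC (java.sql.*; conn is an open Connection).
        Statement stmt = conn.createStatement();
        stmt.setFetchSize(50);                        // rows pulled per round trip
        ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
        while (rs.next()) {
            // each next() past a fetch boundary triggers another 50-row fetch
        }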

  • Gmail Doesn't Like My Cron

    - by Robery Stackhouse
    You might wonder what *nix administration has to do with maker culture. Plenty, if *nix is your automation platform of choice. I am using Ubuntu, Exim, and Ruby as the supporting cast of characters for my reminder service. Being able to send yourself an email with some data or a link at a pre-selected time is a pretty handy thing to be able to do indeed. Works great for jogging my less-than-great memory.
    http://www.linuxsa.org.au/tips/time.html
    http://forum.slicehost.com/comments.php?DiscussionID=402&page=1
    http://articles.slicehost.com/2007/10/24/creating-a-reverse-dns-record
    http://forum.slicehost.com/comments.php?DiscussionID=1900
    http://www.cyberciti.biz/faq/how-to-test-or-check-reverse-dns/
    After going on a huge round-the-world wild-goose chase, I was finally told by someone on the exim-users list that my IP was in a range blocked by the Spamhaus PBL. Google said I need an SPF record:
    http://articles.slicehost.com/2008/8/8/email-setting-a-sender-policy-framework-spf-record
    http://old.openspf.org/wizard.html
    http://www.kitterman.com/spf/validate.html
    The version of Exim that I could get from the Ubuntu package manager didn't support DKIM, so I uninstalled Exim and installed Postfix.
    https://help.ubuntu.com/community/UbuntuTime
    http://www.sendmail.org/dkim/checker

  • Algorithm to figure out appointment times?

    - by Rachel
    I have a weird situation where a client would like a script that automatically sets up thousands of appointments over several days. The tricky part is the appointments are for a variety of US time zones, and I need to take the consumer's local time zone into account when generating appointment dates and times for each record. Appointment Rules: Appointments should be set from 8AM to 8PM Eastern Standard Time, with breaks from 12P-2P and 4P-6P. This leaves a total of 8 hours per day available for setting appointments. Appointments should be scheduled 5 minutes apart. 8 hours of 5-minute intervals means 96 appointments per day. There will be 5 users at a time handling appointments. 96 appointments per day multiplied by 5 users equals 480, so the maximum number of appointments that can be set per day is 480. Now the tricky requirement: Appointments are restricted to 8am to 8pm in the consumer's local time zone. This means that the earliest time allowed for each appointment is different depending on the consumer's time zone: Eastern: 8A Central: 9A Mountain: 10A Pacific: 11A Alaska: 12P Hawaii or Undefined: 2P Arizona: 10A or 11A based on current Daylight Savings Time Assuming a data set can be several thousand records, and each record will contain a timezone value, is there an algorithm I could use to determine a Date and Time for every record that matches the rules above?
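
    A minimal sketch of one way to do this (my own construction, not from the question; names like EARLIEST_ET and schedule are mine, and DST handling is deliberately simplified): precompute each zone's earliest allowed Eastern hour, generate the day's 5-minute slots for 5 users, sort records so late-opening zones land in later slots, and spill the remainder to the next day.

        from datetime import datetime, timedelta

        # Earliest Eastern hour at which each zone's consumers reach 8 AM locally.
        # Arizona is 10 during DST, 11 otherwise -- fixed at 10 here for brevity.
        EARLIEST_ET = {'Eastern': 8, 'Central': 9, 'Mountain': 10, 'Pacific': 11,
                       'Alaska': 12, 'Hawaii': 14, None: 14, 'Arizona': 10}
        BLOCKS = [(8, 12), (14, 16), (18, 20)]  # ET blocks; 12-2P and 4-6P are breaks
        USERS = 5                               # 96 intervals x 5 users = 480/day

        def slots_for_day(day):
            """Yield (slot_time, user) pairs: 5-minute slots x 5 users, Eastern time."""
            for start, end in BLOCKS:
                t = day.replace(hour=start, minute=0, second=0, microsecond=0)
                stop = day.replace(hour=end, minute=0, second=0, microsecond=0)
                while t < stop:
                    for user in range(USERS):
                        yield t, user
                    t += timedelta(minutes=5)

        def schedule(records, first_day):
            """records: dicts with a 'tz' key. Greedy earliest-window-first fill."""
            pending = sorted(records, key=lambda r: EARLIEST_ET[r['tz']])
            out, day, i = [], first_day, 0
            while i < len(pending):
                for t, user in slots_for_day(day):
                    if i == len(pending):
                        break
                    if t.hour >= EARLIEST_ET[pending[i]['tz']]:  # 8 AM local reached
                        out.append((pending[i], t, user))
                        i += 1
                day += timedelta(days=1)
            return out

    Because pending is sorted ascending by earliest allowed hour, whenever the head record cannot use a slot, no remaining record can either, so the slot is simply skipped; every zone's earliest hour is before 8 PM ET, so everything eventually places.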

  • How to create a single integer index value based on two integers where the first is unlimited?

    - by Jan Doggen
    I have table data containing an integer value X ranging from 1 to unknown, and an integer value Y ranging from 1..9. The data need to be presented in the order 'X, then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a one-dimensional integer value as an index (sort order). If X were limited to an upper bound of, say, 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could be a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y? [Added] Background information: I have a Gantt/TreeList component where the tasks are ordered on a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:
    ID  Baseline  ParentID
     1     0         0     (task)
     5     2         1     (baseline)
     8     1         1     (baseline)
     9     0         0     (task)
    12     0         0     (task)
    16     1        12     (baseline)
    Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identifies a task. IDs are unlimited (the X variable). The user plays with the visibility of baselines, e.g. he wants to see all tasks with all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through records). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component.
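
    One observation worth making explicit: the multiplier only has to clear Y's range, not X's. Since Y is a single decimal digit (0..9 here), X*10 + Y preserves the (X, then Y) ordering no matter how large X grows; the only real constraint is integer overflow, which a 64-bit field pushes out of practical reach. A hedged OnCalcFields sketch, with field names taken from the sample data and untested against any particular dataset component:

        procedure TDataModule1.cdsTasksCalcFields(DataSet: TDataSet);
        var
          X, Y: Int64;
        begin
          Y := DataSet.FieldByName('Baseline').AsInteger;    // 0 = task, 1..9 = baseline
          if Y = 0 then
            X := DataSet.FieldByName('ID').AsInteger         // tasks sort by their own ID
          else
            X := DataSet.FieldByName('ParentID').AsInteger;  // baselines group under parent
          // Y is one decimal digit, so X*10 + Y orders by X first, then Y,
          // for arbitrarily large X (Int64 overflow is the only limit).
          DataSet.FieldByName('TaskIndex').AsLargeInt := X * 10 + Y;
        end;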

  • Tip: Recording Non-Maximized Applications in UPK

    - by Marc Santosusso
    Have you ever wanted to record an application that would not maximize, or an application that would look strange maximized? Or perhaps your Windows desktop has become cluttered with icons and you don't want to capture the clutter in your recordings. Here's a tip that will help: create a background for your recording.
    1. Create a blank HTML file with a black background in your favorite HTML editor, or download this sample file: UPK_Recording_Background.html (right click to save). If you would prefer a different color background in the sample file, open it in Notepad and change "#000" to a different HTML color.
    2. Open UPK_Recording_Background.html in its own web browser window.
    3. Press F11 to make the web browser window full screen. This should give you a completely black screen. (This works great in modern versions of the most popular browsers; I successfully used Firefox 15, Chrome 22, and IE 9.)
    4. Open or switch to the desired application so that it sits on top of the full-screen browser window. If the application you are recording is also in a browser, it is important that it be in a separate browser window from UPK_Recording_Background.html.
    5. Record your topic normally.
    The above steps create a recording background using an HTML file and a web browser. This is just one method; for instance, you could do the same thing with an image editor and an image viewer with a full-screen view. Now you can record a non-maximized application without a distracting background. I hope you find this to be a helpful tip. Let us know what you think in the comments.
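
    A guess at what the downloadable sample contains (the post doesn't show its markup): a minimal page whose only job is a solid background:

        <!DOCTYPE html>
        <html>
          <head><title>UPK Recording Background</title></head>
          <!-- change #000 to any HTML color for a different backdrop -->
          <body style="background: #000"></body>
        </html>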

  • session management: verifying a user's log-in state

    - by good_computer
    I am storing sessions in my database. Every time a user logs in, I create a new row for the new session, generate a new session id, and send it as a cookie to the browser. My session data looks something like this: { 'user_id': 1234, 'user_name': 'Sam', ... } When a request comes in, I check whether a cookie with a session id was sent. If it was, I fetch the session data from my database (or memcache) corresponding to that session id. When the user logs out, I remove the session data from my database (and memcache), and delete the cookie from the user's browser too. Notice that my session data has nothing like logged_in: true. This is because if I find a session record in the database (or memcache) I deduce that the user is logged in, and if there is no session record found, the user is not logged in. My questions: is this the right approach? Should I have a logged_in key in my session data? Is there any possibility that a session record may be present on the server where the corresponding user is actually NOT logged in? Are there any security implications in having or not having such a key?
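
    A minimal sketch of the lookup described, with hypothetical helper names (cache, db): the presence of the session row is itself the logged-in flag, so a separate logged_in key adds no information as long as rows are reliably deleted on logout and expired after inactivity.

        def current_user(request):
            sid = request.cookies.get('session_id')
            if sid is None:
                return None                          # no cookie -> not logged in
            session = cache.get(sid) or db.find_session(sid)
            if session is None:
                return None                          # no row -> not logged in
            return session['user_id']                # row exists -> logged in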

  • Nested entities in Google App Engine: am I doing it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but some doubts drill my head. Say I have the model:

        class User(ndb.Model):
            email = ndb.StringProperty(indexed=True)
            password = ndb.StringProperty(indexed=False)
            first_name = ndb.StringProperty(indexed=False)
            last_name = ndb.StringProperty(indexed=False)
            created_at = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def key(cls, email):
                return ndb.Key(User, email)

            @classmethod
            def Add(cls, email, password, first_name, last_name):
                user = User(parent=cls.key(email), email=email, password=password,
                            first_name=first_name, last_name=last_name)
                user.put()
                UserLogin.Record(email)

        class UserLogin(ndb.Model):
            time = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def Record(cls, user_email):
                login = UserLogin(parent=User.key(user_email))
                login.put()

    And I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method will be executed. Now the question -- am I doing this right? Thanks.
    EDIT 2: OK, used the typed arguments, but then it raised this: Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password'). That is clear enough to understand, but I don't get why the docs are misleading. They implicitly propose to use:

        # Set Employee as Address entity's parent directly...
        address = Address(parent=employee)

    But Model expects a key. And what's worse, parent=user.key() complains that key isn't callable; I found out that user.key works.
    EDIT 1: After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used:

        user = User('[email protected]', 'password', 'First', 'Last')
        user.put()
        stamp = UserLogin(parent=user)
        stamp.put()

    I understand that Model was given the wrong arguments, BUT why is it in the docs?
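
    For what it's worth, a sketch of the corrected calls: ndb model constructors accept keyword arguments only, and parent must be a Key rather than an entity. The Address(parent=employee) form comes from the older db API, which did accept an entity instance there -- a likely source of the docs confusion. (Email and values below are placeholders.)

        user = User(email='user@example.com', password='password',
                    first_name='First', last_name='Last')
        user.put()
        stamp = UserLogin(parent=user.key)   # in ndb, .key is a property, not a method
        stamp.put()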

  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time during the 3 AM hour, my Xorg process will increase to 100% CPU and GPU load will also increase to 100%. The process also becomes unkillable: I cannot sudo kill -9 it or get back control with sudo service lightdm restart. I also cannot switch to a tty screen with ctrl + alt + f1. To reboot I have to log in with ssh, but this is not perfect, because if I reboot while it is doing this my ZFS pool will fail to mount when it comes back up (that is where my /home is). Does anyone have any ideas as to why I can't stop and restart Xorg, or even better, know why this is happening? Thanks. NOTE: For anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem. My all-time record is 6 days without a crash. I'll post here if it crashes again or I'm able to set a new record.

  • Install Base Transaction Error Troubleshooting

    - by LuciaC
    Oracle Installed Base is an item instance life cycle tracking application that facilitates enterprise-wide life cycle item management and tracking capability. In a typical process flow a sales order is created and shipped; this updates Inventory and creates a new item instance in Install Base (IB). The Inventory update results in a record being placed in the SFM Event Queue. If the record is successfully processed the IB tables are updated; if there is an error the record is placed in the csi_txn_errors table, and the error needs to be resolved so that the IB instance can be created. It's extremely important to be proactive and monitor IB Transaction Errors regularly. Errors cascade and can build up exponentially if not resolved. Due to this cascade effect, error records need to be considered as a whole and not individually; the root cause of any error needs to be resolved first, and this may result in the subsequent errors resolving themselves. Install Base Transaction Error Diagnostic Program: In the past the IBtxnerr.sql script was used to diagnose transaction errors; this is now replaced by an enhanced concurrent-program version of the script. See the following note for details of how to download, install and run the concurrent program, as well as details of how to interpret the results: Doc ID 1501025.1 - Install Base Transaction Error Diagnostic Program. The program provides comprehensive information about the errors found, as well as links to knowledge articles which can help to resolve each specific error. Troubleshooting: Watch the replay of the 'EBS CRM: 11i and R12 Transaction Error Troubleshooting - an Overview' webcast or download the presentation PDF (go to Doc ID 1455786.1 and click on the 'Archived 2011' tab). The webcast and PDF include more information, including SQL statements that you can use to identify errors and their sources, as well as recommended setup and troubleshooting tips. Refer to these notes for comprehensive information: Doc ID 1275326.1: E-Business Oracle Install Base Product Information Center; Doc ID 1289858.1: Install Base Transaction Errors Master Repository; Doc ID 577978.1: Troubleshooting Install Base Errors in the Transaction Errors Processing Form. Don't forget your Install Base Community, where you can ask questions to help you resolve your IB transaction errors.

  • Google Analytics custom variables and how are they recorded?

    - by mrtsherman
    I have been asked to add GA custom variable tracking to my company's website. The company website uses server-side includes, so modifications to the tracking code happen identically everywhere; maintenance is therefore a headache. Also, GA takes about twenty-four hours for custom variables to start showing up in reports, and that makes troubleshooting a headache. So we have custom variables like:

        // visitor level tracking, id = 12345
        _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);
        // page level tracking, email = [email protected]
        _gaq.push(['_setCustomVar', 1, 'email', '[email protected]', 1]);

    The marketing people want the following out of this: a user visits the site and we record a unique id for them; whenever they return, this id is used in GA. A user signs up for our newsletter on page X and we record their email address; whenever they return, this email address is used in GA. Now, a big problem for me is that I don't use GA and the marketing people don't use custom variables, so we don't actually know how this will work. Do I want page, session or visitor level tracking? What happens because the same GA code is used on every page? If they visit the email sign-up form and we record the email address, but then they go somewhere else where email is nonexistent, will the value get 'overwritten'? Sorry for the long question, but there are a lot of unknowns for a GA noob.
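
    One detail worth flagging, as a reading of the classic ga.js API rather than advice from the post: _setCustomVar's arguments are (slot, name, value, scope), where scope 1 = visitor, 2 = session, 3 = page, and each slot holds one value at a time. Both snippets above use slot 1 with visitor scope (the trailing 1), so as written the email would overwrite the id. The two visitor-sticky values the marketing goals describe would normally sit in separate slots (email below is a placeholder):

        // slot 1, visitor scope: persists across visits until overwritten
        _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);
        // slot 2, visitor scope: set once on newsletter signup, reported on return visits
        _gaq.push(['_setCustomVar', 2, 'email', 'user@example.com', 1]);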

  • How should I implement a transaction database in EJB 3.0?

    - by JamesBoyZ
    In the CustomerTransactions entity, I have the following field to record what the customer bought:

        @ManyToMany
        private List<Item> listOfItemsBought;

    When I think more about this field, there's a chance it may not work, because merchants are allowed to change an item's information (e.g. price, discount, etc.). Hence, this field will not be able to record what the customer actually bought when the transaction occurred. At the moment, I can only think of 2 ways to make it work. 1) Record the transaction details in a String field. I feel this would be messy if I need to extract information about the transaction later on. 2) Whenever the merchant changes an item's information, don't update that item's fields directly; instead, create a new item with all the new information and keep the old item untouched. I feel this way is better because I can easily extract information about the transaction later on. However, the downside is that my Item table may contain a lot of rows. I'd be very grateful if someone could give me advice on how to tackle this problem. UPDATE: I'd like to add more information about the current design:

        public class Customer implements Serializable {
            @OneToMany
            private List<CustomerTransactions> listOfTransactions;
        }

        public class CustomerTransactions implements Serializable {
            @ManyToMany
            private List<Item> listOfItemsBought;
        }

        public class Merchant implements Serializable {
            @OneToMany
            private List<Item> listOfSellingItems;
        }
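
    A third option, sketched here as a common pattern rather than anything from the question: snapshot the mutable fields into a dedicated line-item entity at purchase time, so the Item row stays a live catalog entry while each transaction keeps its own frozen copy. All names below are hypothetical:

        import java.io.Serializable;
        import java.math.BigDecimal;
        import javax.persistence.*;

        @Entity
        public class TransactionLine implements Serializable {
            @Id @GeneratedValue
            private Long id;

            @ManyToOne
            private Item item;                  // live catalog reference, for traceability

            private String itemNameAtPurchase;  // frozen copies of the mutable fields
            private BigDecimal priceAtPurchase;
            private BigDecimal discountAtPurchase;
            private int quantity;
        }

    CustomerTransactions would then hold a @OneToMany List<TransactionLine> instead of the @ManyToMany List<Item>, which avoids both the messy String field and the row explosion of cloning whole items.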

  • Android - Getting audio to play through earpiece

    - by Donal Rafferty
    I currently have code that reads a recording in from the device's mic using the AudioRecord class and then plays it back using the AudioTrack class. My problem is that playback comes out of the speakerphone; I want it to play out via the earpiece on the device. Here is my code:

        public class LoopProg extends Activity {

            boolean isRecording; // currently not used
            AudioManager am;
            int count = 0;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
                am.setMicrophoneMute(true);
                while (count <= 1000000) {
                    Record record = new Record();
                    record.run();
                    count++;
                    Log.d("COUNT", "Count is : " + count);
                }
            }

            public class Record extends Thread {
                static final int bufferSize = 200000;
                final short[] buffer = new short[bufferSize];
                short[] readBuffer = new short[bufferSize];

                public void run() {
                    isRecording = true;
                    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
                    int buffersize = AudioRecord.getMinBufferSize(11025,
                            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
                    AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025,
                            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize);
                    AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC, 11025,
                            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                            buffersize, AudioTrack.MODE_STREAM);
                    am.setRouting(AudioManager.MODE_NORMAL, 1, AudioManager.STREAM_MUSIC);
                    int ok = am.getRouting(AudioManager.ROUTE_EARPIECE);
                    Log.d("ROUTING", "getRouting = " + ok);
                    setVolumeControlStream(AudioManager.STREAM_VOICE_CALL);
                    // am.setSpeakerphoneOn(true);
                    Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn());
                    am.setSpeakerphoneOn(false);
                    Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn());
                    atrack.setPlaybackRate(11025);
                    byte[] buffer = new byte[buffersize];
                    arec.startRecording();
                    atrack.play();
                    while (isRecording) {
                        arec.read(buffer, 0, buffersize);
                        atrack.write(buffer, 0, buffer.length);
                    }
                    arec.stop();
                    atrack.stop();
                    isRecording = false;
                }
            }
        }

    As you can see in the code, I have tried using the AudioManager class and its methods, including the deprecated setRouting method, and nothing works; the setSpeakerphoneOn method seems to have no effect at all, and neither does the routing method. Has anyone got any ideas on how to get it to play via the earpiece instead of the speakerphone?
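
    A hedged pointer rather than a confirmed fix: on most devices of that era, earpiece routing follows the voice-call stream combined with in-call audio mode, so a sketch of the usual change would be:

        // Route playback to the earpiece: voice-call stream + in-call mode, speaker off.
        am.setMode(AudioManager.MODE_IN_CALL);
        am.setSpeakerphoneOn(false);
        AudioTrack atrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 11025,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                buffersize, AudioTrack.MODE_STREAM);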

  • Jini : single server with multiple clients

    - by user200340
    Hi all, I have a question about how multiple clients can access a single file located on the server side while keeping the file consistent. I have a simple PhoneBook server-client Jini program running at the moment. The server only provides some getter functions to clients, such as getName(String number) and getNumber(String name) from a PhoneBook class (serializable); phonebook data are stored in a text file (phonebook.txt) at the moment. I have tried to implement some functions that allow writing new records into the phonebook.txt file. If the name in the record being written already exists, an integer is appended to the name. For example, if the existing phonebook.txt contains ... John 01-01010101 ... and the record being written is "John 01-12345678", then "John_1 01-12345678" will be written into phonebook.txt. However, if I start two clients A and B (on the same machine using localhost), and A tries to write "John 01-11111111" while B tries to write "John 01-22222222", the earlier record gets overwritten by the later one. So there must be something I did completely wrong. My client and server code are just like the Jini HelloWorld example. My server-side code: 1. LookupDiscovery with parameter new String[]{""}; 2. a DiscoveryListener for the LookupDiscovery; 3. registrations are saved in a HashTable; 4. for every discovered lookup service, I use the registrar to register the ServiceItem; the ServiceItem contains null attributeSets, a null serviceId, and a service. The client code has: 1. LookupDiscovery with parameter new String[]{""}; 2. a DiscoveryListener for the LookupDiscovery; 3. a ServiceTemplate with null attributeSets, a null serviceId and a type, the type being the interface class; 4. for each found ServiceRegistrar, if it can find the ServiceTemplate being looked for, the returned Object is cast into the type of the interface class. I have tried to google for more details, and I found JavaSpaces could be the piece I'm missing. But I am still not sure about it (I have only been using Jini for a very short time). Any help would be greatly appreciated.
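
    One hedged observation on the lost update itself, independent of Jini: two clients doing read-modify-write on phonebook.txt can interleave, so the second write clobbers the first. Serializing the mutation inside the remote service is the minimal fix; a sketch with hypothetical helper names (loadPhoneBook, savePhoneBook):

        // Server side: one synchronized entry point means concurrent clients
        // queue up instead of interleaving their read-modify-write cycles.
        public synchronized void addEntry(String name, String number) throws IOException {
            Map<String, String> book = loadPhoneBook();   // read phonebook.txt
            String key = name;
            int suffix = 1;
            while (book.containsKey(key)) {               // John -> John_1 -> John_2 ...
                key = name + "_" + suffix++;
            }
            book.put(key, number);
            savePhoneBook(book);                          // rewrite the file in one step
        }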

  • FASM vs MASM translation problem with mov si, offset msg

    - by Ruben Trancoso
    Hi folks, I just did my first test with MASM and FASM with (almost) the same code, and I ran into trouble. The only difference is that, to produce just the 104 bytes I need to write to the MBR, in FASM I put org 7c00h and in MASM org 0h. The problem is the mov si, offset msg, which in the first case translates to 44 7C (7c44h) and in MASM translates to 44 00 (0044h) -- but only when I change org 7c00h to org 0h in MASM. Otherwise MASM produces the entire segment from 0 to 7dff. How do I solve this? In short, how do I make MASM produce a binary whose first byte is at 7c00h, with jumps remaining relative to 7c00h?

        .model TINY
        .code
        org 7c00h                       ; Boot entry point. Address 07c0:0000 on the computer memory
            xor ax, ax                  ; Zero out ax
            mov ds, ax                  ; Set data segment to base of RAM
            jmp start                   ; Jump to the first byte after DOS boot record data

        ; ----------------------------------------------------------------------
        ; DOS boot record data
        ; ----------------------------------------------------------------------
        brINT13Flag     db 90h          ; 0002h - 0EH for INT13 AH=42 READ
        brOEM           db 'MSDOS5.0'   ; 0003h - OEM name & DOS version (8 chars)
        brBPS           dw 512          ; 000Bh - Bytes/sector
        brSPC           db 1            ; 000Dh - Sectors/cluster
        brResCount      dw 1            ; 000Eh - Reserved (boot) sectors
        brFATs          db 2            ; 0010h - FAT copies
        brRootEntries   dw 0E0h         ; 0011h - Root directory entries
        brSectorCount   dw 2880         ; 0013h - Sectors in volume, < 32MB
        brMedia         db 240          ; 0015h - Media descriptor
        brSPF           dw 9            ; 0016h - Sectors per FAT
        brSPH           dw 18           ; 0018h - Sectors per track
        brHPC           dw 2            ; 001Ah - Number of Heads
        brHidden        dd 0            ; 001Ch - Hidden sectors
        brSectors       dd 0            ; 0020h - Total number of sectors
                        db 0            ; 0024h - Physical drive no.
                        db 0            ; 0025h - Reserved (FAT32)
                        db 29h          ; 0026h - Extended boot record sig
        brSerialNum     dd 404418EAh    ; 0027h - Volume serial number (random)
        brLabel         db 'OSAdventure' ; 002Bh - Volume label (11 chars)
        brFSID          db 'FAT12 '     ; 0036h - File System ID (8 chars)

        ; ----------------------------------------------------------------------
        ; Boot code
        ; ----------------------------------------------------------------------
        start:
            mov si, offset msg
            call showmsg
        hang:
            jmp hang

        msg db 'Loading...',0

        showmsg:
            lodsb
            cmp al, 0
            jz showmsgd
            push si
            mov bx, 0007
            mov ah, 0eh
            int 10h
            pop si
            jmp showmsg
        showmsgd:
            retn

        ; ----------------------------------------------------------------------
        ; Boot record signature
        ; ----------------------------------------------------------------------
            dw 0AA55h                   ; Boot record signature
        END
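
    A hedged sketch of the usual MASM workaround, not verified against any particular MASM version: FASM's org only changes the assumed load address, while MASM's org inside a segment moves the location counter and can emit padding. So keep the code assembled at offset 0 and point the segment registers at 07C0h instead; near offsets like offset msg then stay small and resolve correctly relative to DS:

        ; Assemble at offset 0, address everything through segment 07C0h.
        ; (07C0:0000 is the same physical address as 0000:7C00.)
        org 0
            mov ax, 07C0h
            mov ds, ax
            jmp start
            ; ... boot record data unchanged ...
        start:
            mov si, offset msg  ; now assembles as a small offset, valid relative to DS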

  • SQL Server 2005 - query with case statement

    - by user329266
    I'm trying to put a single query together to be used eventually in a SQL Server 2005 report. I need to: 1) Pull in all distinct records for values in the "eventid" column for a time frame -- this seems to work. 2) For each eventid referenced above, search for all instances of the same eventid to see if there is another record with TaskName like 'review1%'. Again, this seems to work. 3) This is where things get complicated: for each record where TaskName is like review1, I need to see if another record exists with the same eventid and TaskName = 'End'. Ultimately, I need a count of how many records have TaskName like 'review1%', and then how many have TaskName like 'review1%' AND a matching record with TaskName = 'End'. I would think this could be accomplished by setting a new value for each record: for the eventid, if a record exists with TaskName = 'End', set it to 1, and if not, set it to 0. The query below seems to accomplish item #1 above:

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE TimeStamp >= '2010-4-1 00:00:00.000'
                AND TimeStamp <= '2010-4-21 00:00:00.000') AS T
        WHERE seq = 1
        ORDER BY eventid

    And the query below seems to accomplish #2:

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE TimeStamp >= '2010-4-1 00:00:00.000'
                AND TimeStamp <= '2010-4-21 00:00:00.000'
                AND TaskName LIKE 'Review1%') AS T
        WHERE seq = 1
        ORDER BY eventid

    This will bring back the eventids that also have a TaskName = 'End':

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE TimeStamp >= '2010-4-1 00:00:00.000'
                AND TimeStamp <= '2010-4-21 00:00:00.000'
                AND TaskName LIKE 'Review1%') AS T
        WHERE seq = 1
          AND eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')
        ORDER BY eventid

    So I've tried the following to TRY to accomplish #3:

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE TimeStamp >= '2010-4-1 00:00:00.000'
                AND TimeStamp <= '2010-4-21 00:00:00.000'
                AND TaskName LIKE 'Review1%') AS T
        WHERE seq = 1
          AND case when (eventid in (Select eventid from eventrecords where TaskName = 'End') then 1 else 0) as bit end
        ORDER BY eventid

    When I try to run this, I get: "Incorrect syntax near the keyword 'then'." I'm not sure what I'm doing wrong; I haven't seen any examples quite like this anywhere. I should mention that eventrecords has a primary key, but it doesn't seem to help anything when I include it, and I am not permitted to change the table (ugh). I've received one suggestion to use a cursor and temporary table, but am not sure how badly that would bog down performance when the report is running. Thanks in advance.
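
    A sketch of the fix: in T-SQL, CASE is an expression, not a predicate, so in a WHERE clause it has to be compared to a value, and the stray "as bit" cast syntax has to go. In this shape the CASE is also redundant, since the IN test can stand alone:

        WHERE seq = 1
          AND CASE WHEN eventid IN (SELECT eventid FROM eventrecords
                                    WHERE TaskName = 'End')
                   THEN 1 ELSE 0 END = 1
        -- equivalent, and simpler:
        -- AND eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')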

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an I/O-intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this? The only thing I could think of is rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ASCII), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort. I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, though it is generally much less. Is there a better way of doing this? As well, my current code doesn't even work: when I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code:

        FILE *fp;
        unsigned char *buff;
        unsigned char *realbuff;
        FILE *inputFiles[NUM_INPUT_FILES];

        buff = (unsigned char *) malloc(2048);
        realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE);

        fp = fopen("postings0.txt", "r");
        if (fp)
        {
            fread(buff, 1, 2048, fp);
            /* for(int i=0; i <30; i++)
                   cout << buff[i] << endl; */
            int y = 0;
            int recordcounter = 0;
            // cout << buff;
            for (int i = 0; i < 100; i++)
            {
                if (buff[i] != char(10))
                {
                    realbuff[y] = buff[i];
                    y++;
                    recordcounter++;
                }
                else
                {
                    if (recordcounter < RECORD_SIZE)
                        for (int j = recordcounter; j < RECORD_SIZE; j++)
                        {
                            realbuff[y] = char(0);
                            y++;
                        }
                    recordcounter = 0;
                }
            }
            cout << realbuff << endl;
            cout << buff;
        }
        else
            cout << "sorry";

    Thank you very much, bsg
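
    One observation on the "only the first record prints" symptom: realbuff is an unsigned char*, and streaming a C string stops at the first '\0', which is exactly what pads the end of record one -- so the copy may well be working. Printing by explicit length shows the real contents; a sketch:

        // Print y bytes explicitly instead of relying on NUL termination;
        // '.' stands in for the padding bytes so record boundaries stay visible.
        for (int k = 0; k < y; k++)
            std::cout << (realbuff[k] ? char(realbuff[k]) : '.');
        std::cout << std::endl;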

  • Issues with LINQ (to Entity) [adding records]

    - by Mario
    I am using LINQ to Entities in a project where I pull a bunch of data from the database, organize it into a bunch of objects, and save those to the database. I have not had problems writing to the db before using LINQ to Entities, but I have run into a snag with this particular one. Here's the error I get (this is the InnerException; the exception itself is useless!): New transaction is not allowed because there are other threads running in the session. I have seen that before, when I was trying to save my changes inside a loop. In this case, the loop finishes, and it tries to make that call, only to give me the exception. Here's the current code:

        try
        {
            // finalResult is a list of the keys to match on for the records being pulled
            foreach (int i in finalResult)
            {
                var queryEff = (from eff in dbMRI.MemberEligibility
                                where eff.Member_Key == i && eff.EffDate >= DateTime.Now
                                select eff.EffDate).Min();
                if (queryEff != null)
                {
                    // Add a record to the Process table
                    Process prRecord = new Process();
                    prRecord.GroupData = qa;
                    prRecord.Member_Key = i;
                    prRecord.ProcessDate = DateTime.Now;
                    prRecord.RecordType = "F";
                    prRecord.UsernameMarkedBy = "Autocard";
                    prRecord.GroupsId = qa.GroupsID;
                    prRecord.Quantity = 2;
                    prRecord.EffectiveDate = queryEff;
                    dbMRI.AddObject("Process", prRecord);
                }
            }
            dbMRI.SaveChanges(); // <-- Crashes here

            foreach (int i in finalResult)
            {
                var queryProc = from pro in dbMRI.Process
                                where pro.Member_Key == i && pro.UsernameMarkedBy == "Autocard"
                                select pro;
                foreach (var qp in queryProc)
                {
                    Audit aud = new Audit();
                    aud.Member_Key = i;
                    aud.ProcessId = qp.ProcessId;
                    aud.MarkDate = DateTime.Now;
                    aud.MarkedByUsername = "Autocard";
                    aud.GroupData = qa;
                    dbMRI.AddObject("Audit", aud);
                }
            }
            dbMRI.SaveChanges(); // <-- AND here (if the first one is commented out)
        }
        catch (Exception e)
        {
            // Do something here
        }

    Basically, I need it to insert a record, get the identity for that inserted record, and insert a record into another table with the identity from the first record. Given some other constraints, it is not possible to create an FK relationship between the two (I've tried, but some other parts of the app won't allow it, AND my DBA team for whatever reason hates FKs, but that's a topic for another day :)). Any ideas what might be causing this? Thanks!
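
    A hedged guess at the cause, based on how that error usually arises: if finalResult is itself a deferred LINQ query, enumerating it keeps a DataReader open on the connection, and SaveChanges then tries to start a transaction on the same session. Materializing the keys first closes the reader; a sketch:

        // Force finalResult to execute up front (System.Linq assumed),
        // so no reader is open on the context when SaveChanges runs.
        List<int> keys = finalResult.ToList();
        foreach (int i in keys)
        {
            // ... build Process rows as above ...
        }
        dbMRI.SaveChanges();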

  • How to wait for multiple function calls to finish

    - by user351412
    I have a problem with multiple functions processing, shown in the code below. The main function is btnEvalClick. I have tried alternatives 1 and 2 to make the function wait, so that it does not move to the next record before the called functions finish, but neither works.

        //private function btnEvalClick(event:Event):void {
        //    var i:int;
        //    for (i = 0; i < (dataArr1.length); i++) {
        //        dispatchEvent(new FlexEvent('test'));
        //        callfunc1('cydatGMX');  // call function 1
        //        callfun2('cydatGMO');   // call function 2
        //        editSave();             // save record (HTTP)
        //## Alternative 1
        //        if (String(event) == 'SAVEOK') {
        //            RecMov('next');     // move record if save = OK
        //        }
        //## Alternative 2
        //        while (waitfc == '')    // if waitfc not 'OK', continue looping
        //        {
        //            z = z + 1;
        //        }
        //        RecMov('next');         // Move to next record to process
        //    }
        //}

        //private function callfunc1(tasal:String):void {
        //    var mySO:SharedObject;
        //    var myDP:Array;
        //    var i:int;
        //    var prm:Array;
        //    try
        //    {
        //        mySO = SharedObject.getLocal(tasal, '/');
        //        prm = mySO.data.txt.split('?');
        //        for (i = 0; i < (prm.length - 1); i++) {
        //            myDP = prm[i].toString().split('^');
        //            if (myDP[0].toString() == String(dataArr1[dg].MatrixCDCol)) {
        //                myDPX = myDP;
        //                break;
        //            }
        //        }
        //    }
        //    catch (err:Error) {
        //        Alert.show('Limit object creation fail (' + tasal + '), please retry');
        //    }
        //}

        //private function editSave():void
        //{
        //    var parameters:* =
        //    {
        //        'CertID': CertIDCol.text, 'AssetID': AssetIDCol.text, 'CertDate': cdt,
        //        'Ccatat': CcatatCol.text, 'CertBy': CertByCol.text, 'StatusID': StatusIDCol.text,
        //        'UpdDate': lele, 'UpdUsr': ApplicationState.instance.luNm
        //    };
        //    doRequest('Update', parameters, saveItemHandler);
        //}

        //private function doRequest(method_name:String, parameters:Object, callback:Function):void
        //{
        //    // add the method to the parameters list
        //    parameters['method'] = (method_name + 'ASC');
        //    gateway.request = parameters;
        //    var call:AsyncToken = gateway.send();
        //    call.request_params = gateway.request;
        //    call.handler = callback;
        //}

        //private function saveItemHandler(e:Object):void
        //{
        //    if (e.isError)
        //    {
        //        Alert.show('Error: ' + e.data.error);
        //    }
        //    else
        //    {
        //        Alert.show('Record Saved..');
        //        waitfc = 'OK';
        //        dispatchEvent(new FlexEvent('SAVEOK'));
        //    }
        //}
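
    A hedged note on the underlying issue: Flex is single-threaded, so the busy-wait loop (alternative 2) blocks the very handler that would set waitfc, and checking the event name inside the loop (alternative 1) never matches either. The usual shape is to chain off the completion event and advance one record per save; a sketch using the names from the code above (processCurrentRecord is a hypothetical wrapper for the callfunc1/callfun2/editSave sequence):

        // Kick off processing for the current record only; no loop.
        private function btnEvalClick(event:Event):void {
            i = 0;
            addEventListener('SAVEOK', onSaveOk);
            processCurrentRecord();          // record i: callfunc1, callfun2, editSave
        }

        // Runs only after saveItemHandler dispatches 'SAVEOK'.
        private function onSaveOk(e:FlexEvent):void {
            RecMov('next');
            i++;
            if (i < dataArr1.length) {
                processCurrentRecord();      // next record, next HTTP round trip
            }
        }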
