Search Results

Search found 161 results on 7 pages for 'jennifer michelle'.


  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification recently released its Early Draft 3. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week marked another milestone: the first Java EE 7 specification implementation was added to the GlassFish 4 builds. Jakub blogged about Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality.

    Create a Java EE 6-style Maven project:

        mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE as it makes it much easier to edit/add the files. Add the following <repositories>:

        <repositories>
            <repository>
                <id>snapshot-repository.java.net</id>
                <name>Java.net Snapshot Repository for Maven</name>
                <url>https://maven.java.net/content/repositories/snapshots/</url>
                <layout>default</layout>
            </repository>
        </repositories>

    Add the following <dependency>s:

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>javax.ws.rs</groupId>
            <artifactId>javax.ws.rs-api</artifactId>
            <version>2.0-m09</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>2.0-m05</version>
            <scope>test</scope>
        </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class:

        @Path("movies")
        public class MoviesResource {
            @GET
            @Path("list")
            public List<Movie> getMovies() {
                List<Movie> movies = new ArrayList<Movie>();
                movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
                movies.add(new Movie("Toy Story", "Buzz Light Year"));
                movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
                return movies;
            }
        }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project is using the standard JAX-RS APIs. Of course, you need the trivial "Movie" and "Application" classes as well. They are available in the downloadable project anyway. Build the project:

        mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start as "bin/asadmin start-domain") as:

        asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the "testGetMovies" method with:

        @Test
        public void testGetMovies() {
            System.out.println("getMovies");
            Client client = ClientFactory.newClient();
            List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
                    .request()
                    .get(new GenericType<List<Movie>>() {});
            assertEquals(3, movieList.size());
        }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource.
    Run the test by giving the command "mvn test" and see output like:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running example.MoviesResourceTest
        getMovies
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway; this workaround is only required if Jersey 1.x-specific functionality needs to be accessed. The complete source code explained in this project can be downloaded from here. Here are some pointers to follow:

        - JAX-RS 2 Specification Early Draft 3
        - Latest status on the specification (jax-rs-spec.java.net)
        - Latest JAX-RS 2.0 Javadocs
        - Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
        - Latest Jersey API Javadocs
        - Latest GlassFish 4.0 Promoted Build
        - Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].
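    For completeness, here is a minimal sketch of what the "trivial" Movie and Application classes could look like. The real ones ship with the downloadable project, so treat the field names and the @ApplicationPath/@XmlRootElement choices below as assumptions (the "webresources" path matches the URL used in the test above).

        import java.util.HashSet;
        import java.util.Set;
        import javax.ws.rs.ApplicationPath;
        import javax.xml.bind.annotation.XmlRootElement;

        // Movie.java - a simple transfer object matching the two-argument
        // constructor used in MoviesResource; @XmlRootElement lets JAX-RS
        // marshal it into the response.
        @XmlRootElement
        public class Movie {
            private String name;
            private String actor;

            public Movie() { }                       // no-arg constructor for JAXB

            public Movie(String name, String actor) {
                this.name = name;
                this.actor = actor;
            }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getActor() { return actor; }
            public void setActor(String actor) { this.actor = actor; }
        }

        // MyApplication.java (a separate file) - registers the resource under
        // the "webresources" base path.
        @ApplicationPath("webresources")
        public class MyApplication extends javax.ws.rs.core.Application {
            @Override
            public Set<Class<?>> getClasses() {
                Set<Class<?>> classes = new HashSet<Class<?>>();
                classes.add(MoviesResource.class);
                return classes;
            }
        }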

    Read the article

  • BYOD-The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants

    Less than three years ago, Apple introduced a new concept to the world: The Tablet. It's hard to believe that in only 32 months, the iPad has ushered in an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasingly "on the go" lifestyle, and their popularity isn't expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016. Tablets have been utilized for every function imaginable in today's world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It's no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away. Tablets have become a critical part of a user's personal life; but why stop there? Businesses today are taking major strides in implementing these devices, with the hopes of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets. Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year. Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014. It's not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data into their computer, or worse yet into a notebook, especially when the data has to be later transcribed to an online system. The response: the Bring Your Own Device (BYOD) phenomenon. According to Gartner, BYOD is "an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data." Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to. Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap.
    This method is much more enticing for sales reps than spending time logging interactions on their (what seem to be outdated) computers. Forrester and IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, InformationWeek released a list of "9 Powerful Business Uses for Tablet Computers," ranging from "enhancing the customer experience" to "improving data accuracy" to "eco-friendly motivations." Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

    Three things businesses need to do to embrace BYOD:

        1. Make customer-facing websites tablet-friendly for consistent user experiences
        2. Develop tablet applications to continue to enhance the customer experience
        3. Embrace and use the technology that comes with tablets

    Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices coupled with the growing number of business applications ensures that tablets will transform the way that companies and employees perform.

    Read the article

  • Question SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded in a specific audio format:

        System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
            new System.Speech.AudioFormat.SpeechAudioFormatInfo(
                System.Speech.AudioFormat.EncodingFormat.Pcm, 8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format: 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, and a block alignment of 2. When I attempt to execute the following code, nothing is written to my MemoryStream instance; however, when I change from 8000 samples per second up to 11025, the audio data is written successfully.

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        waveStream = new MemoryStream();

        PromptBuilder pbuilder = new PromptBuilder();
        PromptStyle pStyle = new PromptStyle();
        pStyle.Emphasis = PromptEmphasis.None;
        pStyle.Rate = PromptRate.Fast;
        pStyle.Volume = PromptVolume.ExtraLoud;
        pbuilder.StartStyle(pStyle);
        pbuilder.StartParagraph();
        pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
        pbuilder.StartSentence();
        pbuilder.AppendText("This is some text.");
        pbuilder.EndSentence();
        pbuilder.EndVoice();
        pbuilder.EndParagraph();
        pbuilder.EndStyle();

        synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
        synthesizer.Speak(pbuilder);
        synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second but not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest was that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file.

    Update: Recently discovered that this audio format succeeds for certain installed voices but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies with certain voice settings defined in the PromptBuilder.
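    Since the only observable symptom is an empty stream (no exception), one pragmatic workaround is to probe candidate sample rates and keep the first one the current voice actually produces audio for. The following is a minimal diagnostic sketch, not a fix for the underlying 8000 Hz behavior; the helper name and the candidate-rate list are my own.

        using System;
        using System.IO;
        using System.Speech.AudioFormat;
        using System.Speech.Synthesis;

        class SynthFormatProbe
        {
            // Returns 16-bit mono PCM for the first sample rate that yields data.
            static MemoryStream SynthesizeWithFallback(string text, params int[] sampleRates)
            {
                using (var synth = new SpeechSynthesizer())
                {
                    foreach (int rate in sampleRates)
                    {
                        var stream = new MemoryStream();
                        var format = new SpeechAudioFormatInfo(
                            EncodingFormat.Pcm, rate, 16, 1, rate * 2, 2, null);
                        synth.SetOutputToAudioStream(stream, format);
                        synth.Speak(text);
                        synth.SetOutputToNull();
                        if (stream.Length > 0)        // some voices emit nothing at 8000 Hz
                        {
                            stream.Position = 0;      // rewind for the caller
                            return stream;
                        }
                        stream.Dispose();
                    }
                }
                throw new InvalidOperationException("No candidate format produced audio.");
            }

            static void Main()
            {
                using (var pcm = SynthesizeWithFallback("This is some text.", 8000, 11025))
                    Console.WriteLine("Got {0} bytes of PCM.", pcm.Length);
            }
        }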

    Read the article

  • Android Expandable List View Update

    - by Gaurav Arora
    I am implementing a chatting application, where I have made a service to listen all the presence changed. On the change of the presence I want to update the data and I am unable to update the data that is showing in the expandable list view. Please suggest me a means to do the same. public class UserMenuActivity extends ExpandableListActivity { private XMPPConnection connection; String name,availability,subscriptionStatus; TextView tv_Status; /** Variable Define here */ private String[] data = { "View my profile", "New Multiperson Chat", "New Broad Cast Message", "New Contact Category", "New Group", "Invite to CCM", "Search", "Expand All", "Settings", "Help", "Close" }; private String[] data_Contact = { "Rename Category","Move Contact to Category", "View my profile", "New Multiperson Chat", "New Broad Cast Message", "New Contact Category", "New Group", "Invite to CCM", "Search", "Expand All", "Settings", "Help", "Close" }; private String[] data_child_contact = { "Open chat", "Delete Contact","View my profile", "New Multiperson Chat", "New Broad Cast Message", "New Contact Category", "New Group", "Invite to CCM", "Search", "Expand All", "Settings", "Help", "Close" }; private String[] menuItem = { "Chats", "Contacts", "CGM Groups", "Pending","Request" }; private List<String> menuItemList = Arrays.asList(menuItem); private int commonGroupPosition = 0; private String etAlertVal; private DatabaseHelper dbHelper; private int categoryID, listPos; /** New Code here.. */ private ArrayList<String> groupNames; private ArrayList<ArrayList<ChildItems>> childs; private UserMenuAdapter adapter; private Object object; private String[] data2 = { "PIN Michelle", "IP Call" }; private ListView mlist2; private ImageButton mimBtnMenu; private LinearLayout mllpopmenu; private View popupView; private PopupWindow popupWindow; private AlertDialog.Builder alert; private EditText input; private TextView mtvUserName, mtvUserTagLine; private ExpandableListView mExpandableListView; public static List<CategoryDataClass> categoryList; private boolean menuType = false; private String childValContact=""; public static Context context; @Override public void onBackPressed() { if (mllpopmenu.getVisibility() == View.VISIBLE) { mllpopmenu.setVisibility(View.INVISIBLE); } else { if (CCMStaticVariable.CommonConnection.isConnected()) { CCMStaticVariable.CommonConnection.disconnect(); } super.onBackPressed(); } } @SuppressWarnings("unchecked") @Override public boolean onKeyDown(int keyCode, KeyEvent event) { if (keyCode == KeyEvent.KEYCODE_MENU) { if (mllpopmenu.getVisibility() == View.VISIBLE) { mllpopmenu.setVisibility(View.INVISIBLE); } else { if (commonGroupPosition >= 4 && menuType == true) { if(childValContact == ""){ mllpopmenu.setVisibility(View.VISIBLE); mlist2.setAdapter(new ArrayAdapter(UserMenuActivity.this, R.layout.listviewtext, R.id.tvMenuText, data_Contact)); }else{ mllpopmenu.setVisibility(View.VISIBLE); mlist2.setAdapter(new ArrayAdapter(UserMenuActivity.this, R.layout.listviewtext, R.id.tvMenuText, data_child_contact)); } } else if (commonGroupPosition == 0) { mllpopmenu.setVisibility(View.VISIBLE); mlist2.setAdapter(new ArrayAdapter(UserMenuActivity.this, R.layout.listviewtext, R.id.tvMenuText, data)); } } return true; } return super.onKeyDown(keyCode, event); } @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.usermenulayout); dbHelper = new DatabaseHelper(UserMenuActivity.this); //this.context = context.getApplicationContext(); XMPPConn.getContactList(); 
connection = CCMStaticVariable.CommonConnection; Presence userPresence = new Presence(Presence.Type.available); userPresence.setPriority(24); userPresence.setMode(Presence.Mode.away); connection.sendPacket(userPresence); } @Override protected void onResume() { super.onResume(); Presence userPresence = new Presence(Presence.Type.available); userPresence.setPriority(24); userPresence.setMode(Presence.Mode.away); connection.sendPacket(userPresence); XMPPConn.getContactList(); setExpandableListView(); } public boolean onChildClick(ExpandableListView parent, View v, int groupPosition, int childPosition, long id) { if (groupPosition == 1 && childPosition == 0) { startActivity(new Intent(UserMenuActivity.this, InvitetoCCMActivity.class)); } else if (groupPosition == 1 && childPosition != 0) { Intent intent = new Intent(UserMenuActivity.this, UserChatActivity.class); intent.putExtra("userNameVal", XMPPConn.mfriendList.get(childPosition - 1).friendName); startActivity(intent); } else if (groupPosition == 2 && childPosition == 0) { startActivity(new Intent(UserMenuActivity.this, CreateGroupActivity.class)); } else if (groupPosition == 2 && childPosition != 0) { String GROUP_NAME = childs.get(groupPosition).get(childPosition) .getName().toString(); int end = GROUP_NAME.indexOf("("); CCMStaticVariable.groupName = GROUP_NAME.substring(0, end).trim(); startActivity(new Intent(UserMenuActivity.this, GroupsActivity.class)); } else if (groupPosition >= 4) { childValContact = childs.get(groupPosition).get(childPosition).getName().trim(); showToast("user==>"+childValContact, 0); } return false; } private void setExpandableListView() { /***###############GROUP ARRAY ############################*/ final ArrayList<String> groupNames = new ArrayList<String>(); groupNames.add("Chats (2)"); groupNames.add("Contacts (" + XMPPConn.mfriendList.size() + ")"); groupNames.add("CGM Groups (" + XMPPConn.mGroupList.size() + ")"); groupNames.add("Pending (1)"); XMPPConn.getGroup(); categoryList = dbHelper.getAllCategory(); /**Group From Sever*/ if (XMPPConn.mGroupList.size() > 0) { for (int g = 0; g < XMPPConn.mGroupList.size(); g++) { XMPPConn.getGroupContact(XMPPConn.mGroupList.get(g).groupName); groupNames.add(XMPPConn.mGroupList.get(g).groupName + "(" + XMPPConn.mGroupContactList.size()+ ")"); } } if(categoryList.size() > 0){ for (int cat = 0; cat < categoryList.size(); cat++) { groupNames.add(categoryList.get(cat).getCategoryName()+ "(0)"); } } this.groupNames = groupNames; /*** ###########CHILD ARRAY * #################*/ ArrayList<ArrayList<ChildItems>> childs = new ArrayList<ArrayList<ChildItems>>(); ArrayList<ChildItems> child = new ArrayList<ChildItems>(); child.add(new ChildItems("Alisha", "Hi",0)); child.add(new ChildItems("Michelle", "Good Morning",0)); childs.add(child); child = new ArrayList<ChildItems>(); child.add(new ChildItems("", "",0)); if (XMPPConn.mfriendList.size() > 0) { for (int n = 0; n < XMPPConn.mfriendList.size(); n++) { child.add(new ChildItems(XMPPConn.mfriendList.get(n).friendNickName, XMPPConn.mfriendList.get(n).friendStatus, XMPPConn.mfriendList.get(n).friendState)); } } childs.add(child); /************** CGM Group Child here *********************/ child = new ArrayList<ChildItems>(); child.add(new ChildItems("", "",0)); if (XMPPConn.mGroupList.size() > 0) { for (int grop = 0; grop < XMPPConn.mGroupList.size(); grop++) { child.add(new ChildItems( XMPPConn.mGroupList.get(grop).groupName + " (" + XMPPConn.mGroupList.get(grop).groupUserCount + ")", "",0)); } } childs.add(child); child = new 
ArrayList<ChildItems>(); child.add(new ChildItems("Shuchi", "Pending (Waiting for Authorization)",0)); childs.add(child); /************************ Group Contact List *************************/ if (XMPPConn.mGroupList.size() > 0) { for (int g = 0; g < XMPPConn.mGroupList.size(); g++) { /** Contact List */ XMPPConn.getGroupContact(XMPPConn.mGroupList.get(g).groupName); child = new ArrayList<ChildItems>(); for (int con = 0; con < XMPPConn.mGroupContactList.size(); con++) { child.add(new ChildItems( XMPPConn.mGroupContactList.get(con).friendName, XMPPConn.mGroupContactList.get(con).friendStatus,0)); } childs.add(child); } } if(categoryList.size() > 0){ for (int cat = 0; cat < categoryList.size(); cat++) { child = new ArrayList<ChildItems>(); child.add(new ChildItems("-none-", "",0)); childs.add(child); } } this.childs = childs; /** Set Adapter here */ adapter = new UserMenuAdapter(this, groupNames, childs); setListAdapter(adapter); object = this; mlist2 = (ListView) findViewById(R.id.list2); mimBtnMenu = (ImageButton) findViewById(R.id.imBtnMenu); mllpopmenu = (LinearLayout) findViewById(R.id.llpopmenu); mtvUserName = (TextView) findViewById(R.id.tvUserName); mtvUserTagLine = (TextView) findViewById(R.id.tvUserTagLine); //Set User name.. System.out.println("CCMStaticVariable.loginUserName===" + CCMStaticVariable.loginUserName); if (!CCMStaticVariable.loginUserName.equalsIgnoreCase("")) { mtvUserName.setText("" + CCMStaticVariable.loginUserName); } /** Expandable List set here.. */ mExpandableListView = (ExpandableListView) this .findViewById(android.R.id.list); mExpandableListView.setOnGroupClickListener(new OnGroupClickListener() { @Override public boolean onGroupClick(ExpandableListView parent, View v, int groupPosition, long id) { XMPPConn.getContactList(); if (parent.isGroupExpanded(groupPosition)) { commonGroupPosition = 0; }else{ commonGroupPosition = groupPosition; } String GROUP_NAME = groupNames.get(groupPosition); int end = groupNames.get(groupPosition).indexOf("("); String GROUP_NAME_VALUE = GROUP_NAME.substring(0, end).trim(); if (menuItemList.contains(GROUP_NAME_VALUE)) { menuType = false; CCMStaticVariable.groupCatName = GROUP_NAME_VALUE; } else { menuType = true; CCMStaticVariable.groupCatName = GROUP_NAME_VALUE; } long findCatId = dbHelper.getCategoryID(GROUP_NAME_VALUE); if (findCatId != 0) { categoryID = (int) findCatId; } childValContact=""; showToast("Clicked on==" + GROUP_NAME_VALUE, 0); return false; } }); /** Click on item */ mlist2.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> arg0, View arg1, int pos,long arg3) { if (commonGroupPosition >= 4) { if(childValContact == ""){ if (pos == 0) { showAlertEdit(CCMStaticVariable.groupCatName); } /** Move contact to catgory */ if (pos == 1) { startActivity(new Intent(UserMenuActivity.this,AddContactCategoryActivity.class)); } }else{ if(pos == 0){ Intent intent = new Intent(UserMenuActivity.this,UserChatActivity.class); intent.putExtra("userNameVal",childValContact); startActivity(intent); } if(pos == 1){ XMPPConn.removeEntry(childValContact); showToast("Contact deleted sucessfully", 0); Intent intent = new Intent(UserMenuActivity.this,UserMenuActivity.class); } } } else { /** MyProfile */ if (pos == 0) { startActivity(new Intent(UserMenuActivity.this, MyProfileActivity.class)); } /** New multiperson chat start */ if (pos == 1) { startActivity(new Intent(UserMenuActivity.this, NewMultipersonChatActivity.class)); } /** New Broadcast message */ if (pos == 2) { startActivity(new 
Intent(UserMenuActivity.this, NewBroadcastMessageActivity.class)); } /** Click on add category */ if (pos == 3) { showAlertAdd(); } if (pos == 4) { startActivity(new Intent(UserMenuActivity.this, CreateGroupActivity.class)); } if (pos == 5) { startActivity(new Intent(UserMenuActivity.this, InvitetoCCMActivity.class)); } if (pos == 6) { startActivity(new Intent(UserMenuActivity.this, SearchActivity.class)); } if (pos == 7) { onGroupExpand(2); for (int i = 0; i < groupNames.size(); i++) { mExpandableListView.expandGroup(i); } } /** Click on settings */ if (pos == 8) { startActivity(new Intent(UserMenuActivity.this, SettingsActivity.class)); } if (pos == 10) { System.exit(0); } if (pos == 14) { if (mllpopmenu.getVisibility() == View.VISIBLE) { mllpopmenu.setVisibility(View.INVISIBLE); if (popupWindow.isShowing()) { popupWindow.dismiss(); } } else { mllpopmenu.setVisibility(View.VISIBLE); mlist2.setAdapter(new ArrayAdapter( UserMenuActivity.this, R.layout.listviewtext, R.id.tvMenuText, data)); } } } } }); } /** Toast message display here.. */ private void showToast(String msg, int time) { Toast.makeText(this, msg, time).show(); } public String showSubscriptionStatus(String friend){ return friend; } } Service.class public class UpdaterService extends Service { private XMPPConnection connection; String Friend; String user = ""; @Override public IBinder onBind(Intent arg0) { // TODO Auto-generated method stub return null; } @Override public void onCreate() { // Toast.makeText(this, "My Service Created", Toast.LENGTH_LONG).show(); super.onCreate(); } @Override public void onDestroy() { // TODO Auto-generated method stub super.onDestroy(); } @Override public void onStart(Intent intent, int startId) { // TODO Auto-generated method stub showToast("My Service Started", 0); connection = getConnection(); if (connection.isConnected()) { final Roster roster = connection.getRoster(); RosterListener r1 = new RosterListener() { @Override public void presenceChanged(Presence presence) { // TODO Auto-generated method stub XMPPConn.getContactList(); } @Override public void entriesUpdated(Collection<String> arg0) { // TODO Auto-generated method stub //notification("entriesUpdated"); } @Override public void entriesDeleted(Collection<String> arg0) { // TODO Auto-generated method stub //notification("entriesDeleted"); } @Override public void entriesAdded(Collection<String> arg0) { // TODO Auto-generated method stub Iterator<String> it = arg0.iterator(); if (it.hasNext()) { user = it.next(); } RosterEntry entry = roster.getEntry(user); if(entry.getType().toString().equalsIgnoreCase("to")){ int index_of_Alpha = Friend.indexOf("@"); String subID = Friend.substring(0, index_of_Alpha); notification("Hi "+subID+" wants to add you"); } } }; if (roster != null) { roster.setSubscriptionMode(Roster.SubscriptionMode.manual); System.out.println("subscription going on"); roster.addRosterListener(r1); } } else { showToast("Connection lost-", 0); } } protected void showToast(String msg, int time) { Toast.makeText(this, msg, time).show(); } private XMPPConnection getConnection() { return CCMStaticVariable.CommonConnection; } /** Notification manager */ private void notification(CharSequence message) { String ns = Context.NOTIFICATION_SERVICE; NotificationManager mNotificationManager = (NotificationManager) getSystemService(ns); int icon = R.drawable.ic_launcher; CharSequence tickerText = message; long when = System.currentTimeMillis(); Notification notification = new Notification(icon, tickerText, when); Context context = 
getApplicationContext(); CharSequence contentTitle = "CCM"; CharSequence contentText = message; Intent notificationIntent = new Intent(this, ManageNotification.class); notificationIntent.putExtra("Subscriber_ID",user ); PendingIntent contentIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0); notification.setLatestEventInfo(context, contentTitle, contentText, contentIntent); notification.flags |= Notification.FLAG_AUTO_CANCEL; final int HELLO_ID = 1; mNotificationManager.notify(HELLO_ID, notification); } } Here is my adapter class public class UserMenuAdapter extends BaseExpandableListAdapter { private ArrayList<String> groups; private ArrayList<ArrayList<ChildItems>> childs; private Context context; public LayoutInflater inflater; ImageView img_availabiliy; private static final int[] EMPTY_STATE_SET = {}; private static final int[] GROUP_EXPANDED_STATE_SET = {android.R.attr.state_expanded}; private static final int[][] GROUP_STATE_SETS = { EMPTY_STATE_SET, // 0 GROUP_EXPANDED_STATE_SET // 1 }; public UserMenuAdapter(Context context, ArrayList<String> groups, ArrayList<ArrayList<ChildItems>> childs) { this.context = context; this.groups = groups; this.childs = childs; inflater = LayoutInflater.from(context); } @Override public Object getChild(int groupPosition, int childPosition) { return childs.get(groupPosition).get(childPosition); } @Override public long getChildId(int groupPosition, int childPosition) { return (long) (groupPosition * 1024 + childPosition); } @Override public View getChildView(int groupPosition, int childPosition, boolean isLastChild, View convertView, ViewGroup parent) { View v = null; if (convertView != null) v = convertView; else v = inflater.inflate(R.layout.child_layout, parent, false); ChildItems ci = (ChildItems) getChild(groupPosition, childPosition); TextView tv = (TextView) v.findViewById(R.id.tvChild); tv.setText(ci.getName()); TextView tv2 = (TextView) v.findViewById(R.id.tvChild2); tv2.setText(ci.getDailyStatus()); img_availabiliy = (ImageView)v.findViewById(R.id.img_childlayout_AVAILABILITY); ImageView friendPics = (ImageView)v.findViewById(R.id.ivFriendPics); if(ci.getStatusState() == 1){ img_availabiliy.setImageResource(R.drawable.online); } else if(ci.getStatusState()==0){ img_availabiliy.setImageResource(R.drawable.offline); } else if (ci.getStatusState()==2) { img_availabiliy.setImageResource(R.drawable.away); } else if(ci.getStatusState()==3){ img_availabiliy.setImageResource(R.drawable.busy); } else{ img_availabiliy.setImageDrawable(null); } if((groupPosition == 1 && childPosition == 0)){ friendPics.setImageResource(R.drawable.inviteto_ccm); img_availabiliy.setVisibility(View.INVISIBLE); } else if(groupPosition == 2 && childPosition == 0){ friendPics.setImageResource(R.drawable.new_ccmgroup); img_availabiliy.setVisibility(View.VISIBLE); }else{ if(ci.getPicture()!= null){ Bitmap bitmap = BitmapFactory.decodeByteArray(ci.getPicture(), 0, ci.getPicture().length); bitmap = Bitmap.createScaledBitmap(bitmap, 50, 50, true); friendPics.setImageBitmap(bitmap); }else{ friendPics.setImageResource(R.drawable.avatar); } img_availabiliy.setVisibility(View.VISIBLE); } return v; } @Override public int getChildrenCount(int groupPosition) { return childs.get(groupPosition).size(); } @Override public Object getGroup(int groupPosition) { return groups.get(groupPosition); } @Override public int getGroupCount() { return groups.size(); } @Override public long getGroupId(int groupPosition) { return (long) (groupPosition * 1024); } @Override public View 
getGroupView(int groupPosition, boolean isExpanded, View convertView, ViewGroup parent) { View v = null; if (convertView != null) v = convertView; else v = inflater.inflate(R.layout.group_layout, parent, false); String gt = (String) getGroup(groupPosition); TextView tv2 = (TextView) v.findViewById(R.id.tvGroup); if (gt != null) tv2.setText(gt); /**Set Image on group layout, Max/min*/ View ind = v.findViewById( R.id.explist_indicator); View groupInd = v.findViewById( R.id.llgroup); if( ind != null ) { ImageView indicator = (ImageView)ind; if( getChildrenCount( groupPosition ) == 0 ) { indicator.setVisibility( View.INVISIBLE ); } else { indicator.setVisibility( View.VISIBLE ); int stateSetIndex = ( isExpanded ? 1 : 0) ; Drawable drawable = indicator.getDrawable(); drawable.setState(GROUP_STATE_SETS[stateSetIndex]); } } if( groupInd != null ) { RelativeLayout indicator2 = (RelativeLayout)groupInd; if( getChildrenCount( groupPosition ) == 0 ) { indicator2.setVisibility( View.INVISIBLE ); } else { indicator2.setVisibility( View.VISIBLE ); int stateSetIndex = ( isExpanded ? 1 : 0) ; Drawable drawable2 = indicator2.getBackground(); drawable2.setState(GROUP_STATE_SETS[stateSetIndex]); } } return v; } @Override public boolean hasStableIds() { return true; } @Override public boolean isChildSelectable(int groupPosition, int childPosition) { return true; } public void onGroupCollapsed(int groupPosition) { } public void onGroupExpanded(int groupPosition) { } } I just want to update my list in ON PRESENCE CHANGED method in the Service class.. Please suggest me a means to do the same.
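    One common way to do this - sketched below, and not taken from the original post - is to have the service broadcast a notification and let UserMenuActivity rebuild the adapter's backing lists on the UI thread. The action string, receiver name and wiring are illustrative assumptions; the onResume()/onPause() lines would be merged into the existing methods.

        // Needs: import android.content.BroadcastReceiver; import android.content.Context;
        //        import android.content.Intent; import android.content.IntentFilter;

        // Hypothetical action name shared by UpdaterService and UserMenuActivity.
        public static final String ACTION_PRESENCE_CHANGED = "example.ccm.PRESENCE_CHANGED";

        // In UpdaterService, inside RosterListener.presenceChanged(...):
        //     XMPPConn.getContactList();
        //     sendBroadcast(new Intent(ACTION_PRESENCE_CHANGED));

        // In UserMenuActivity:
        private final BroadcastReceiver presenceReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // onReceive runs on the UI thread, so it is safe to rebuild the
                // group/child lists here; setExpandableListView() creates a fresh
                // adapter. Alternatively, mutate the existing lists in place and
                // call adapter.notifyDataSetChanged().
                setExpandableListView();
            }
        };

        @Override
        protected void onResume() {
            super.onResume();
            registerReceiver(presenceReceiver, new IntentFilter(ACTION_PRESENCE_CHANGED));
            // ... existing onResume() code ...
        }

        @Override
        protected void onPause() {
            unregisterReceiver(presenceReceiver);
            super.onPause();
        }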

    Read the article

  • Probation is Over: PASS Board Year 1, Q2

    - by Denise McInerney
    Though it's not always official, every job begins with a probation period. You start out with lots of questions and every day you find out how much more you have to learn. Usually after a few months you discover that you can actually answer some questions and have at least an idea of what you are supposed to be doing. Now, at the end of my second quarter on the "job" of serving on the PASS Board, I have reached that point. My probation period is over. The last three months were busy for the entire Board with the budget process, an in-person meeting and moving forward with PASS Global Growth plans. I had also set a specific goal for myself for my 2nd quarter: to see the Board adopt a Code of Conduct for the PASS Summit.

    Code of Conduct

    When I ran for the Board I included my desire to see PASS establish a code of conduct in my campaign platform. I was motivated to do this for a few reasons. Other technical conferences have had incidents of harassment. Most of these did not have a policy in place prior to having a problem, though several conference organizers have since adopted anti-harassment policies or codes of conduct. I felt it would be in PASS' interest to establish a policy so we would be prepared should there be an incident. "This is Community": adopting a code of conduct would reinforce our community orientation and send a message about the positive character of the Summit. PASS is a leader among technical organizations for its promotion and support of women. Adopting a code of conduct would further demonstrate our leadership in this area. After researching similar policies from other organizations I published a first draft in April. I solicited feedback from the Board, HQ staff and some PASS members. Incorporating that feedback I presented version 4 at the May Board meeting, where we had a good discussion. You can read the meeting minutes for details. I incorporated points from the Board discussion as well as feedback from a legal review to produce a final version which has been submitted to the Board. It will be discussed at the Board meeting July 12. You can read the full text at the end of this post.

    Virtual Chapters

    In the first quarter we started ramping up marketing support for the Virtual Chapters. Since then each edition of the Connector has highlighted a different VC to help get out the message about the variety of educational opportunities that are offered. These VC profiles will continue in the coming months. I was very pleased to welcome the new DBA Fundamentals VC, which is geared toward new DBAs, people who are considering entering the field and those transitioning from a different IT role. Thanks to the contributions of Erin Stellato, Michelle Nalliah and Karla Landrum we published a "Virtual Chapter Guidebook". This document includes great advice on how to build and promote a VC. It's also a reference for how things work, from budgets to webinar hosting. I think this document will be extremely valuable to all our VC leaders and am grateful to those who put it together.

    Board Meeting/SQL Rally

    The Board met in May in Dallas. Among the items discussed were Global Growth, the budget, future events and the upcoming elections. We covered a lot of ground in two days and I will again refer you to the meeting minutes for details. The meeting schedule allowed us to participate in the SQL Rally networking events and one full day of the conference. I enjoyed having the opportunity to meet and talk with many PASS members.
    And my hat is off to the SQL Rally organizers who put on an outstanding event.

    Global Growth

    PASS has undertaken a major initiative to reach and engage SQL Server professionals around the world. This Global Growth plan is ambitious and will have a significant impact on the strategic direction of the organization. We have been reaching out to the community for feedback, including hosting Twitter chats and live Town Hall meetings. I co-hosted two of these events and appreciated hearing the different perspectives of the people who participated. If you have not done so, I encourage you to read about the Global Growth vision and proposed governance changes and submit your feedback.

    FY13 Budget

    July 1 is the beginning of PASS' fiscal year, which makes the end of June the deadline for approving a budget. Each director submits a budget for his or her portfolio. For the Virtual Chapter portfolio I focused on how we can allocate resources to grow the VCs. Budgeting is a give-and-take process, and while I didn't get everything I asked for, I'm pleased the FY13 budget includes a significant increase in financial support for the Virtual Chapters. Many people put a lot of work into the budget, but no two people deserve credit more than VP of Finance Douglas McDowell and Accounting Manager Sandy Cherry. Thanks to both of them for getting us across the goal line on time.

    SQL Saturday

    I attended SQL Saturdays in Orange Co. CA and Phoenix. It's always inspiring to see the enthusiasm in the community for learning and networking. These events are successful due to the hard work of many volunteers. Thanks to the organizers in both cities for all your efforts.

    Next Up

    This quarter we'll be gearing up plans for the VCs at the Summit and exploring ways the VCs can best support PASS' Global Growth work. I'll also be wrapping up work on the Code of Conduct and attending a Board meeting in September. And I will be at SQL Saturday #144 in Sacramento later this month. Here is the language of the Code of Conduct I have submitted to the Board for consideration:

    PASS Code of Conduct

    The PASS Summit provides database professionals from a variety of backgrounds with an opportunity to connect, share and learn. We value the strong sense of community that characterizes this event and we seek to foster an inclusive, professional atmosphere. We are dedicated to providing a harassment-free conference experience for everyone, regardless of gender, race, sexual orientation, disability, physical appearance, religion or any other protected classification. Everyone at the Summit is expected to follow the Code of Conduct. This includes but is not limited to: PASS Staff, Exhibitors, Speakers, Attendees and anyone affiliated with the event. Participants are expected to follow the Code of Conduct at all Summit events, including PASS-sponsored social events.

    Participant behavior

    Harassment includes, but is not limited to, offensive verbal comments related to gender, race, sexual orientation, disability, physical appearance, religion, or any other protected classification. Intimidation, threats, stalking, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact and unwelcome attention will also be considered harassment. Similarly, sexual, racist, derogatory, threatening or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.
    Recourse

    If a participant engages in any conduct that is prohibited under this Code of Conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expelling the offender from the conference. No refunds will be granted to attendees expelled from the Summit due to violations of the Code of Conduct. If you are being harassed, witness harassment, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by their "Headquarters/Staff" shirts and are trained to handle the situation appropriately. A Code of Conduct Committee (CCC) made up of the Executive Manager and three members of the Board of Directors designated by the President will be authorized to take action in response to an incident or behavior that violates the Code of Conduct.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which we can extend to bring in the index name as well:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
            INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area) it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the DMV are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we'll skip these as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

        avg_fragmentation_in_percent - the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
        fragment_count - the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
        avg_fragment_size_in_pages - the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
        page_count - the total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index were bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a clear correlation between page_count (the number of 8KB pages used by the index), avg_page_space_used_in_percent (how full each page is, i.e. how much of the 8KB has actual data written on it), record_count (how many records are recorded in the index) and avg_record_size_in_bytes (the average size of each record). This approximates to:

        (page_count * 8 * 1024 * (avg_page_space_used_in_percent / 100)) / record_count = avg_record_size_in_bytes*

    avg_page_space_used_in_percent is an important column to review as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

        - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
        - the columns used in the index, which should be analysed so that new records do not need to be inserted in the middle of the index but rather are always added to the end.

    * - it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
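    As a minimal sketch of the kind of 'intelligent' process described above - the 5%/30% thresholds and the 100-page floor below are commonly used rules of thumb, not values taken from this article, so tune them for your own systems:

        SELECT DB_NAME(ddips.database_id)     AS DatabaseName
             , OBJECT_NAME(ddips.[object_id]) AS TableName
             , i.[name]                       AS IndexName
             , ddips.avg_fragmentation_in_percent
             , ddips.page_count
             , CASE
                   WHEN ddips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
                   WHEN ddips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
                   ELSE 'none'
               END                            AS SuggestedAction
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
            INNER JOIN [sys].[indexes] AS i
                ON [i].[object_id] = [ddips].[object_id]
               AND [i].[index_id]  = [ddips].[index_id]
        WHERE ddips.page_count > 100    -- small indexes are rarely worth defragmenting
        ORDER BY ddips.avg_fragmentation_in_percent DESC;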

    Read the article

  • How to organize pictures on website using css? [on hold]

    - by user3624023
    Here is my website without any CSS: http://www.wmcicompsci.ca/cs20/students/theglowcloud/Bare%20bones%20website/classics_bare.html

    I am new to CSS and I would like to organize these pictures in this fashion: http://css-tricks.com/examples/SlideinCaptions/

    I would just like this layout for the pictures; I do not need the sliding of the captions (although I would like it, it does not work in my browser). I would like the captions to be like titles on top of the pictures. Here is my current HTML code:

        <!DOCTYPE html>
        <html>
        <head>
            <title> My favourite Fantasy books</title>
            <meta charset = "utf-8">
            <link rel="stylesheet" href="css.css">
        </head>
        <body>
            <nav id="main_nav">
                <ul>
                    <li><a href = " homepage_css.html"> Homepage</a></li>
                    <li><a href="science_fiction_css.html">Science Fiction</a></li>
                    <li><a href="classics_css.html">Classics</a></li>
                    <li><a href="fantasy_css.html">Fantasy</a></li>
                </ul>
            </nav>
            <h1> Fantasy Genre</h1>
            <p> Here are my favourites:</p>
            <ul>
                <li> Goblet of Fire by J.K Rowling (4th book in the Harry Potter Series) </li>
                <li><img class= displayed src="pics/fantasy/goblet_of_fire.jpg" width="200" alt="Goblet of Fire book cover"></li>
                <li> Graceling by Kristan Cashore </li>
                <li><img src="pics/fantasy/graceling.jpg" width="200" alt = " Graceling book cover"></li>
                <li> Serpent's Shadow by Rick Riordan (3rd book in the Kane Chronicles) </li>
                <li><img src="pics/fantasy/serpents_shadow.jpg" width="200" alt="Serpent's Shadow book cover"></li>
                <li> The Hobbit by J.R.R. Tolkein </li>
                <li><img src="pics/fantasy/the_hobbit.jpg" width="200" alt="The Hobbit book cover"></li>
                <li> The False Prince by Jennifer Neilson (1st book in the Ascendance Triology) </li>
                <li><img src="pics/fantasy/the_false_prince.jpg" width="200" alt="The False Prince book cover"></li>
            </ul>
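    A minimal sketch of one way to get that layout, assuming the HTML is restructured so each book becomes a single <li> holding a caption element above its <img>. The .book-list and .caption class names are mine, not from the original page:

        /* css.css */
        .book-list {
            list-style: none;        /* drop the bullet markers */
            margin: 0;
            padding: 0;
        }
        .book-list li {
            display: inline-block;   /* covers sit side by side and wrap as needed */
            vertical-align: top;
            width: 200px;            /* matches the width="200" on the images */
            margin: 10px;
            text-align: center;
        }
        .book-list .caption {
            display: block;          /* the title sits on its own line above the picture */
            font-weight: bold;
            margin-bottom: 4px;
        }

    Each list item would then look like:

        <li><span class="caption">The Hobbit by J.R.R. Tolkein</span>
            <img src="pics/fantasy/the_hobbit.jpg" width="200" alt="The Hobbit book cover"></li>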

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress.

    Preamble

    These templates are not actually that new; I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; those templates are actually XSL that is created by the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS, the new templates have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer templates plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space.

    What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates, you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros, whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering (BIP functions are not currently supported). The most impressive, for me at least, is the sheet 'bursting': you can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course still get all the native Excel functionality.

    Pre-reqs

        - You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch a BIP instance running with OBIEE, no problem.
        - You need Excel 2000 or above to build the templates.
        - Some patience - there is no Excel template builder for these new templates, so it is all going to have to be done by hand. It's not that tough but can get a little 'fiddly'.
        - You cannot test the template from Excel; it has to be deployed and then run.

    Limitations

    The new templates are definitely superior to the Analyzer templates, but there are a few limitations:

        - Re-grouping is not supported. You can only follow a data hierarchy, not bend it to your will, unless you want to get into macros.
        - No support for BIP functions. The templates support native XSL functions only.
        - No template builder.

    Getting Started

    The templates make use of named cells and groups of cells to allow BIP to find the insertion point for data points. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output, we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back starting from the simple to the more complex. They generate ideal data sets for these templates.
    I'm working with the following data set:

        <EMPLOYEES>
            <LIST_G_DEPT>
                <G_DEPT>
                    <DEPARTMENT_ID>10</DEPARTMENT_ID>
                    <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
                    <LIST_G_EMP>
                        <G_EMP>
                            <EMPLOYEE_ID>200</EMPLOYEE_ID>
                            <EMP_NAME>Jennifer Whalen</EMP_NAME>
                            <EMAIL>JWHALEN</EMAIL>
                            <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
                            <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
                            <SALARY>4400</SALARY>
                        </G_EMP>
                    </LIST_G_EMP>
                    <TOTAL_EMPS>1</TOTAL_EMPS>
                    <TOTAL_SALARY>4400</TOTAL_SALARY>
                    <AVG_SALARY>4400</AVG_SALARY>
                    <MAX_SALARY>4400</MAX_SALARY>
                    <MIN_SALARY>4400</MIN_SALARY>
                </G_DEPT>
                ...
            </LIST_G_DEPT>
        </EMPLOYEES>

    Simple enough to follow, and bread-and-butter stuff for an RTF template.

    Building the Template

    For an Excel template we need to start by thinking about how we want to render the data, so come up with a sample output in Excel. It's all dummy data, nothing marked up yet, with one row of data for each level: the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element; we'll get to Excel functions later.

    Marking Up Cells

    Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format:

        For data groupings: XDO_GROUP_?group_name?
        For data elements: XDO_?element_name?

    Notice the question mark delimiter; the group_name and element_name are case sensitive. The next step is to find out how to name cells. The easiest method is to highlight the cell and then type in the name. You can also use the Name Manager dialog; I use Excel 2007 and it's available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have. Using my data set from above, you should end up with a full set of XDO_?...? entries in the Name Manager dialog, and you can correct any mistakes you might have made through the same dialog.

    Creating Groups

    Next come the group cells. To create these it's a simple case of highlighting the cells that make up the group and then naming them. For the employee group, highlight the employee row and then type in the name XDO_GROUP_?G_EMP?. Notice the 10,000 total is outside of the G_EMP group; it's actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub G_EMP grouping and name it XDO_GROUP_?G_DEPT?. Notice the 10,000 total is included in the G_DEPT group; this will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required entries in the XDO_METADATA sheet, but it must be present; the only cell that is important is the 'Data Constraints:' cell, the rest is optional. To save curious users getting distracted, hide the metadata sheet.

    Deploying & Running Templates

    We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template; set the template type to Excel. You will now be able to run the report and hopefully get the expected output. You will not get the red highlighting - that's just some conditional formatting I added to the template using Excel functionality.
Back to the output: your dates are probably going to look raw too, since the canonical XML timestamps come through as text. I got around this for now using an Excel formula on the cell:

=--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"")

This swaps the 'T' for a space and strips the trailing timezone offset so that Excel can coerce the remaining text into a real date value. Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally, or you can access the reports repository, then once you have loaded the template the first time you can just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involved importing a CSV file to a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file to a SQL Server database. As some may already know, importing a CSV file to SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly infers each column's data type from the first few rows and drops any data that does not match that type. To overcome this problem, I used a schema.ini file to define the data type of each column in the CSV file and allow the provider to read that and recognize the exact data types.

Now what is schema.ini? Taken from the documentation: schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

Points to remember before creating schema.ini:

1. The schema information file must always be named 'schema.ini'.
2. The schema.ini file must be kept in the same directory where the CSV file exists.
3. The schema.ini file must be created before reading the CSV file.
4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

Here's an example of what the schema looks like:

[Employee.csv]
ColNameHeader=False
Format=CSVDelimited
DateTimeFormat=dd-MMM-yyyy
Col1=EmployeeID Long
Col2=EmployeeFirstName Text Width 100
Col3=EmployeeLastName Text Width 50
Col4=EmployeeEmailAddress Text Width 50

To get started, let's go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, let's fire up Visual Studio and create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and then place the schema.ini in that folder; the uploaded CSV files will be stored in this folder after the user imports them. Now add a WebForm to the project and set up the HTML markup with one (1) FileUpload control, one (1) Button, and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file to the SQL Server database.
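One wiring detail the post relies on but doesn't show: GetConnectionString() in the code below reads a connection string named DBConnectionString from web.config. A minimal sketch of that entry might look like the following; the server and database values are assumptions for this demo, so point them at your own instance:

<configuration>
  <connectionStrings>
    <!-- Hypothetical values: adjust Data Source / Initial Catalog to your own server and the TestDB database created above -->
    <add name="DBConnectionString"
         connectionString="Data Source=localhost;Initial Catalog=TestDB;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>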
Here is the full code block below:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.IO;
using System.Text;

namespace WebApplication1
{
    public partial class CSVToSQLImporting : System.Web.UI.Page
    {
        private string GetConnectionString()
        {
            return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
        }

        private void CreateDatabaseTable(DataTable dt, string tableName)
        {
            string sqlQuery = string.Empty;
            string sqlDBType = string.Empty;
            string dataType = string.Empty;
            int maxLength = 0;
            StringBuilder sb = new StringBuilder();

            sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

            for (int i = 0; i < dt.Columns.Count; i++)
            {
                dataType = dt.Columns[i].DataType.ToString();
                if (dataType == "System.Int32")
                {
                    sqlDBType = "INT";
                }
                else if (dataType == "System.String")
                {
                    sqlDBType = "NVARCHAR";
                    maxLength = dt.Columns[i].MaxLength;
                }

                if (maxLength > 0)
                {
                    sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                }
                else
                {
                    sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                }
            }

            sqlQuery = sb.ToString();
            sqlQuery = sqlQuery.Trim().TrimEnd(',');
            sqlQuery = sqlQuery + " )";

            using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
            {
                sqlConn.Open();
                SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                sqlCmd.ExecuteNonQuery();
                sqlConn.Close();
            }
        }

        private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
        {
            string sqlQuery = string.Empty;
            StringBuilder sb = new StringBuilder();

            sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
            sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
            sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

            sqlQuery = sb.ToString();

            using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
            {
                sqlConn.Open();
                SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                sqlCmd.ExecuteNonQuery();
                sqlConn.Close();
            }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void BTNImport_Click(object sender, EventArgs e)
        {
            if (FileUpload1.HasFile)
            {
                FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                if (fileInfo.Name.Contains(".csv"))
                {
                    string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                    string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                    // Save the CSV file on the server inside the 'UploadedCSVFiles' folder
                    FileUpload1.SaveAs(csvFilePath);

                    // Fetch the location of the CSV file
                    string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                    string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                    string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                    // Load the data from the CSV into a DataTable
                    OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                    DataTable dtCSV = new DataTable();
                    DataTable dtSchema = new DataTable();

                    adapter.FillSchema(dtCSV, SchemaType.Mapped);
                    adapter.Fill(dtCSV);

                    if (dtCSV.Rows.Count > 0)
                    {
                        CreateDatabaseTable(dtCSV, fileName);
                        Label2.Text = string.Format("The table ({0}) has been successfully created in the database.", fileName);

                        string fileFullPath = filePath + fileInfo.Name;
                        LoadDataToDatabase(fileName, fileFullPath, ",");

                        Label1.Text = string.Format("({0}) records have been loaded into the table {1}.", dtCSV.Rows.Count, fileName);
                    }
                    else
                    {
                        LBLError.Text = "File is empty.";
                    }
                }
                else
                {
                    LBLError.Text = "Unable to recognize file.";
                }
            }
        }
    }
}

The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable(), and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it simply fetches the connection string that is configured in the web.config file. CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the filename; as the method name suggests, it automatically creates a table in the database based on the source DataTable and the filename of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters: the tableName, the fileFullPath, and the delimeter value. This method is where the actual saving or importing of data from the CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and at the same time this is where CreateDatabaseTable() and LoadDataToDatabase() are called. If you noticed, I also added some basic error trapping and validation within that event.

Now, to test the importing utility, let's create some simple data in CSV format. Just for the simplicity of this demo, let's create a CSV file, name it "Employee", and add some data to it. Here's an example below:

1,VMS,Durano,[email protected]
2,Jennifer,Cortes,[email protected]
3,Xhaiden,Durano,[email protected]
4,Angel,Santos,[email protected]
5,Kier,Binks,[email protected]
6,Erika,Bird,[email protected]
7,Vianne,Durano,[email protected]
8,Lilibeth,Tree,[email protected]
9,Bon,Bolger,[email protected]
10,Brian,Jones,[email protected]

Now save the newly created CSV file to some location on your hard drive. Okay, let's run the application and browse to the CSV file that we have just created. Take a look at the sample screenshots below: one after browsing to the CSV file, and one after clicking the Import button. Now if we look at the database that we created earlier, you'll notice that the Employee table has been created with the imported data in it. See the screenshot below.

That's it! I hope someone finds this post useful!

Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind of SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is "it depends". Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I've seen, and let you come to your own conclusions.

What are the possible SharePoint roles?

I guess the first thing you need to understand is the different roles that exist in SharePoint (and there are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010; however, things are generally improved in 2010 and easier for junior individuals to grasp.

SharePoint Administrator

The absolutely positively only role that you should not be without, no matter the size of your organization or SharePoint deployment, is a SharePoint administrator. These guys are essential to keeping things running and figuring out what's wrong when things aren't running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don't even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent, if not rockstar, help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real-world example of what I'm talking about:

We have a rockstar admin… and I'm sure she's sick of my throwing her name around, so she'll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our server team came to us and said: "Hi Lori, I'm finalizing the MOSS servers and doing updates that require a restart; can I restart them?" Seems like a harmless request from your server team, does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem, right? Sounded fair to me… but no… not to our fearless SharePoint admin: "I need a complete list of patches that will be applied. There is an update out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues."

What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches, and then when some problem occurred in SharePoint we'd have had to go through the fun task of tracking down exactly what caused the issue and resolving it. How much time would that have taken? If you have a junior SharePoint admin, or an admin who's not out there staying on top of what's going on, you could spend days tracking down something as simple as a patch you should not have applied. I will even go as far as to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren't way off track in other areas. For your day-to-day sanity, and to keep SharePoint running smoothly, you need an awesome admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin.
SharePoint Developer

Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here, and there are many flavors of "developers". Are you writing custom code? Using SharePoint Designer? What about SharePoint branding? Are all of these considered developers? I would say yes. Are they interchangeable? I'd say no. Development in SharePoint is such a large beast in itself. I would say that it's not so large that you can't know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you'd better make sure they are well taken care of, because those guys are ready-made to move over to a consulting role and charge you three times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker.

SharePoint Power User / No-Code Solutions Developer

These SharePoint Swiss Army knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time, and they will create views, dashboards, and KPIs that will blow your mind and give your execs the "wow" they are looking for. Not only can they deliver that wow factor, but they can mash up, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint custom code developer, let one of these rockstars look at it and show you what they can do (probably in less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller.

SharePoint Developer – Custom Code

If you want to really integrate SharePoint with your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code. Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD workflows is the big one that comes to mind). If you truly want to knock down all the walls, then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are:

- Custom Workflows
- Custom Web Parts
- Web Service functionality
- Imports of data from legacy systems
- Exports of data to legacy systems
- Custom Actions
- Event Receivers
- Service Applications (2010)

These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett.

SharePoint Branding

"But it LOOKS like SharePoint!" Somebody call the WAAAAAAAAAAAAHMbulance… Themes, master pages, page layouts, zones, and over 2000 styles in CSS: these guys not only have to be comfortable with all of SharePoint's quirks and pain points when branding, but they have to know it TWICE, for publishing and non-publishing sites.
Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT and XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heather Waterman, Cathy Dew, and Marcy Kellar.

SharePoint Architect

SharePoint architects are generally SharePoint admins or developers who have moved into more of a BA role. Is that fair to say? These guys really have a grasp and understanding of what SharePoint IS and what it can do. They help you structure your farms to meet your needs and help you design your applications the correct way. It's always a good idea to bring in a rockstar SharePoint architect to do a sanity check and make sure you aren't doing anything stupid. Most organizations probably do not have a rockstar architect on staff; these guys are generally brought in at the deployment of a farm, the upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built, and hand those requirements to the development staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap.

Other Roles / Specialties

On top of all these roles you also get people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc., etc., etc. Again, most organizations will not have one of these gurus on staff; they'll just pay out the nose for them when they need them. :)

SharePoint End User

Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly, end users are the ones who truly make your deployment a success by using it, but they are also your biggest enemy in breaking it. :) We love you guys… really!!!

Okay, all that's fine and dandy, but what should MY SharePoint team look like?

It depends! Okay… Are you just doing out-of-the-box team sites with no custom development? Then you are probably fine with a great admin team and a great no-code solution development team. How many people do you need? Depends on how busy you can keep them. Sorry, I can't answer the question about numbers without knowing your specific needs; I can just tell you who you MIGHT need and what they will do for you. I'll leave you with what my ideal SharePoint team would look like for a particular scenario:

Farm / Organization Structure

- Dev, QA, and 2 Production Farms
- 5000 – 10000 users
- Custom development and integration with legacy systems
- Team sites, My Sites, intranet, document libraries, and overall company collaboration

Team

- Rockstar SharePoint Administrator
- 2-3 junior SharePoint Administrators
- SharePoint Architect / Lead Developer
- 2 Power User / No-Code Solution Developers
- 2-3 Custom Code developers
- Branding expert

With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would never need to bring in contract help from time to time when you need an uber-specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the names of a couple if you are interested). Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don't bring in a team that looks like this and then yell at me because they are sitting around with nothing to do, or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to assess your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint, you really do get what you pay for in employees, contractors, and software. SharePoint can become absolutely critical to your business, and because you skimped on hiring a developer, he creates a web part that brings down the farm because he doesn't know what he's doing; or you hire an admin who thinks it's fine to stick everything in the same content database and then can't figure out why people are complaining. SharePoint can be an enormous blessing to an organization or its biggest curse. Spend the time and money to do it right, or be prepared to spend even more time and money later to fix it.

    Read the article

  • Reading input from a text file, omits the first and adds a nonsense value to the end?

    - by Greenhouse Gases
    Hi there. When I input locations from a txt file I am getting a peculiar error where it seems to miss off the first entry, yet add a garbage entry to the end of the linked list (it is designed to take the name, latitude, and longitude for each location, you will notice). I imagine this to be an issue with where it starts collecting the inputs and where it stops, but I can't find the error!! It reads the first line correctly but then skips to the next before adding it; during testing for the bug it had no record of the first location, Lisbon, even though whilst stepping into the method call it was reading it. Very bizarre, but hopefully someone knows the issue. Here is firstly my header file:

#include <string>

struct locationNode
{
    char nodeCityName [35];
    double nodeLati;
    double nodeLongi;
    locationNode* Next;

    void CorrectCase() // Correct upper and lower case letters of input
    {
        int MAX_SIZE = 35;
        int firstLetVal = this->nodeCityName[0], letVal;
        int n = 1; // variable for name index from second letter onwards
        if((this->nodeCityName[0] > 90) && (this->nodeCityName[0] < 123)) // First letter is lower case
        {
            firstLetVal = firstLetVal - 32; // Capitalise first letter
            this->nodeCityName[0] = firstLetVal;
        }
        while(n <= MAX_SIZE - 1)
        {
            if((this->nodeCityName[n] >= 65) && (this->nodeCityName[n] <= 90))
            {
                letVal = this->nodeCityName[n] + 32;
                this->nodeCityName[n] = letVal;
            }
            n++;
        }
        //cityNameInput = this->nodeCityName;
    }
};

class Locations
{
private:
    int size;
public:
    Locations(){ }; // constructor for the class
    locationNode* Head;
    //int Add(locationNode* Item);
};

And here is the file containing main:

// U08221.cpp : main project file.
#include "stdafx.h"
#include "Locations.h"
#include <iostream>
#include <string>
#include <fstream>

using namespace std;

int n = 0, x, locationCount = 0, MAX_SIZE = 35;
string cityNameInput;
char targetCity[35];
bool acceptedInput = false, userInputReq = true, match = false, nodeExists = false; // note: addLocation(), set to true to enable user input as opposed to txt file
locationNode *start_ptr = NULL; // pointer to first entry in the list
locationNode *temp, *temp2;     // temp is a pointer to a new locationNode we can assign changing values, followed by a call to Add
locationNode *seek, *bridge;

void setElementsNull(char cityParam[])
{
    int y = 0, count = 0;
    while(cityParam[y] != NULL)
    {
        y++;
    }
    while(y < MAX_SIZE)
    {
        cityParam[y] = NULL;
        y++;
    }
}

void addLocation()
{
    temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it
    if(!userInputReq) // bool that determines whether user input is required in adding the node to the list
    {
        cout << endl << "Enter the name of the location: ";
        cin >> temp->nodeCityName;
        temp->CorrectCase();
        setElementsNull(temp->nodeCityName);
        cout << endl << "Please enter the latitude value for this location: ";
        cin >> temp->nodeLati;
        cout << endl << "Please enter the longitude value for this location: ";
        cin >> temp->nodeLongi;
        cout << endl;
    }
    temp->Next = NULL; // set to NULL as, when one is added, it is currently the last in the list and so cannot point to the next

    if(start_ptr == NULL) // if list is currently empty, start_ptr will point to this node
    {
        start_ptr = temp;
    }
    else
    {
        temp2 = start_ptr; // We know this is not NULL - list not empty!
        while (temp2->Next != NULL)
        {
            temp2 = temp2->Next; // Move to next link in chain until reach end of list
        }
        temp2->Next = temp;
    }
    ++locationCount; // increment counter for number of records in list
    if(!userInputReq)
    {
        cout << "Location successfully added to the database! There are " << locationCount << " location(s) stored" << endl;
    }
}

void populateList()
{
    ifstream inputFile;
    inputFile.open ("locations.txt", ios::in);
    userInputReq = true;
    temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it
    do
    {
        inputFile.get(temp->nodeCityName, 35, ' ');
        setElementsNull(temp->nodeCityName);
        inputFile >> temp->nodeLati;
        inputFile >> temp->nodeLongi;
        setElementsNull(temp->nodeCityName);
        if(temp->nodeCityName[0] == 10) // remove linefeed from input
        {
            for(int i = 0; temp->nodeCityName[i] != NULL; i++)
            {
                temp->nodeCityName[i] = temp->nodeCityName[i + 1];
            }
        }
        addLocation();
    }
    while(!inputFile.eof());
    userInputReq = false;
    cout << "Successful!" << endl << "List contains: " << locationCount << " entries" << endl;
    cout << endl;
    inputFile.close();
}

bool nodeExistTest(char targetCity[]) // see if entry is present in the database
{
    match = false;
    seek = start_ptr;
    int letters = 0, letters2 = 0, x = 0, y = 0;
    while(targetCity[y] != NULL)
    {
        letters2++;
        y++;
    }
    while(x <= locationCount) // locationCount is number of entries currently in list
    {
        y = 0, letters = 0;
        while(seek->nodeCityName[y] != NULL) // count letters in the current name
        {
            letters++;
            y++;
        }
        if(letters == letters2) // same amount of letters in the name
        {
            y = 0;
            while(y <= letters) // compare each letter against one another
            {
                if(targetCity[y] == seek->nodeCityName[y])
                {
                    match = true;
                    y++;
                }
                else
                {
                    match = false;
                    y = letters + 1; // no match, terminate comparison
                }
            }
        }
        if(match)
        {
            x = locationCount + 1; // found match so terminate loop
        }
        else
        {
            if(seek->Next != NULL)
            {
                bridge = seek;
                seek = seek->Next;
                x++;
            }
            else
            {
                x = locationCount + 1; // end of list so terminate loop
            }
        }
    }
    return match;
}

void deleteRecord() // complete this
{
    int junction = 0;
    locationNode *place;
    cout << "Enter the name of the city you wish to remove" << endl;
    cin >> targetCity;
    setElementsNull(targetCity);
    if(nodeExistTest(targetCity)) // if this node does exist
    {
        if(seek == start_ptr) // if it is the first in the list
        {
            junction = 1;
        }
        if(seek != start_ptr && seek->Next == NULL) // if it is last in the list
        {
            junction = 2;
        }
        switch(junction) // will alter list accordingly dependent on where the searched-for link is
        {
            case 1:
                start_ptr = start_ptr->Next;
                delete seek;
                --locationCount;
                break;
            case 2:
                place = seek;
                seek = bridge;
                delete place;
                --locationCount;
                break;
            default:
                bridge->Next = seek->Next;
                delete seek;
                --locationCount;
                break;
        }
    }
    else
    {
        cout << targetCity << "That entry does not currently exist" << endl << endl << endl;
    }
}

void searchDatabase()
{
    char choice;
    cout << "Enter search term..." << endl;
    cin >> targetCity;
    if(nodeExistTest(targetCity))
    {
        cout << "Entry: " << endl << endl;
    }
    else
    {
        cout << "Sorry, that city is not currently present in the list." << endl << "Would you like to add this city now Y/N?" << endl;
        cin >> choice;
        /*while(choice != ('Y' || 'N'))
        {
            cout << "Please enter a valid choice..." << endl;
            cin >> choice;
        }*/
        switch(choice)
        {
            case 'Y':
                addLocation();
                break;
            case 'N':
                break;
            default:
                cout << "Invalid choice" << endl;
                break;
        }
    }
}

void printDatabase()
{
    temp = start_ptr; // set temp to the start of the list
    do
    {
        if (temp == NULL)
        {
            cout << "You have reached the end of the database" << endl;
        }
        else
        {
            // Display details for what temp points to at that stage
            cout << "Location : " << temp->nodeCityName << endl;
            cout << "Latitude : " << temp->nodeLati << endl;
            cout << "Longitude : " << temp->nodeLongi << endl;
            cout << endl;
            // Move on to next locationNode if one exists
            temp = temp->Next;
        }
    }
    while (temp != NULL);
}

void nameValidation(string name)
{
    n = 0; // start from first letter
    x = name.size();
    while(!acceptedInput)
    {
        if((name[n] >= 65) && (name[n] <= 122)) // is in the range of letters
        {
            while(n <= x - 1)
            {
                while((name[n] >= 91) && (name[n] <= 97)) // ERROR!!
                {
                    cout << "Please enter a valid city name" << endl;
                    cin >> name;
                }
                n++;
            }
        }
        else
        {
            cout << "Please enter a valid city name" << endl;
            cin >> name;
        }
        if(n <= x - 1)
        {
            acceptedInput = true;
        }
    }
    cityNameInput = name;
}

int main(array<System::String ^> ^args)
{
    // main contains test calls to functions at present
    cout << "Populating list...";
    populateList();
    printDatabase();
    deleteRecord();
    printDatabase();
    cin >> cityNameInput;
}

The text file contains this (ignore the names, they are just for testing!!):

Lisbon 45 47
Fattah 45 47
Darius 42 49
Peter 45 27
Sarah 85 97
Michelle 45 47
John 25 67
Colin 35 87
Shiron 40 57
George 34 45
Sean 22 33

The output omits Lisbon, but adds a garbage entry with nonsense values on the end. Any ideas why? Thank you in advance.

    Read the article

< Previous Page | 3 4 5 6 7