Author: Mohamed Sakr

Join Lync Conference Using Lync 2013 SDK


This article explains how to prepare for and join online meetings using the Microsoft Lync 2013 SDK. You can join a scheduled online meeting with colleagues and clients from down the hall or across the world without leaving your desk.

For online meetings with audio or video calls, you can access your scheduled conference invitation using the Microsoft Lync 2013 SDK.

When a user schedules an online meeting in Outlook 2007/2010, the meeting and the Lync information associated with it are stored in a number of MAPI properties on the appointment item.

You can use this information for many tasks, for example to join a conference meeting or to add the meeting to a calendar.

The following demonstrations run on different computers, which are used to start a conference by using the Microsoft Lync 2013 SDK in either UI Automation mode or UI Suppression mode.

Getting the Online Meeting URL

You can get the online meeting URLs with the EWS Managed API. Microsoft encourages Microsoft .NET Framework developers to use the EWS Managed API instead of auto-generated proxies to develop against Exchange Web Services; the EWS Managed API object model is significantly easier to use than auto-generated proxy object models.

You can find more information on how to use EWS in the previous post, Get Lync Online Meetings’ Information using the EWS Managed API.

Getting the Conference URI

The conversation properties that you use to form a complete conference URI are null at the time that the conversation is created.

To get the conference URI properties, register for the Conversation.PropertyChanged event on the conversation and read the URI value when it becomes available.
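
A minimal sketch of wiring up this event, assuming the client is already signed in and that conversations are surfaced through the ConversationManager, might look like this (the Conversation_PropertyChanged handler it registers is shown below):

// Sketch: subscribe to ConversationAdded, then to PropertyChanged on each new
// conversation so the ConferencingUri can be read once it is set.
// Requires Microsoft.Lync.Model and Microsoft.Lync.Model.Conversation.
LyncClient client = LyncClient.GetClient();
client.ConversationManager.ConversationAdded += (sender, e) =>
{
    e.Conversation.PropertyChanged += Conversation_PropertyChanged;
};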

The following are examples of how to build a conference join URL using properties of the conversation.

UI Automation Mode

private void Conversation_PropertyChanged
    (object sender, ConversationPropertyChangedEventArgs e)
{
    // Build the conference join URI once the ConferencingUri property is populated.
    if (e.Property == ConversationProperty.ConferencingUri)
    {
        Conversation conversation = (Conversation)sender;
        string conferenceUri = "conf:"
            + conversation.Properties[ConversationProperty.ConferencingUri]
            + "?" + conversation.Properties[ConversationProperty.Id];
    }
}

UI Suppression Mode

void ConversationManager_ConversationAdded
    (object sender, ConversationManagerEventArgs e)
{
    // Build the conference join URI from the online meeting URL stored in
    // ConferenceUrl (obtained from Exchange, e.g. https://meet.<domain>/<user>/<id>).
    string url = ConferenceUrl.Replace("https://meet.", "");
    string domain = url.Substring(0, url.IndexOf("/"));
    url = url.Replace(domain + "/", "");
    string name = url.Substring(0, url.IndexOf("/"));
    string id = url.Replace(name + "/", "");

    // url now holds the conference URI used to join the conference.
    url = "conf:sip:" + name + "@" + domain +
          ";gruu;opaque=app:conf:focus:id:" + id + "?" +
          _activeConversation.Properties[ConversationProperty.Id].ToString();
}

Joining the Conference

Once another user obtains the conference URI, that user can call the BeginStartConversation method to join the conference in a Microsoft Lync 2013 SDK application. The string argument of the call is the conference URI.

UI Automation Mode

// Join the conference by its URI; the second argument is the handle of the parent window.
Automation automation = LyncClient.GetAutomation();
IAsyncResult ar = automation.BeginStartConversation(
    this.ConferenceUriObtainedFromEmail_string,
    this.Handle.ToInt32(),
    null,
    null);
automation.EndStartConversation(ar);

UI Suppression Mode

void ConversationManager_ConversationAdded
    (object sender, ConversationManagerEventArgs e)
{
    //Join the conference
    _activeConversation.ConversationManager.JoinConference(url);
}

Get Lync Online Meetings’ Information using the EWS Managed API


In this article we will learn how to use Exchange Web Services (EWS) via the EWS Managed API to access an Exchange mailbox calendar and view the information for online Lync meetings.

The examples are written to work with Exchange Online and Lync Online from the Office 365 suite, as well as with a private cloud or on-premises deployment of Lync 2010 and Exchange 2007 or Exchange 2010.

When a user schedules an online meeting in Outlook 2007/2010, the meeting and the Lync information associated with it are stored in a number of MAPI properties on the appointment item.

You can use this information for many tasks, for example to join a conference meeting using the browser or in another way such as the Lync SDK, or to add the meeting to a calendar.

EWS Managed API

The Microsoft Exchange Web Services (EWS) Managed API provides an intuitive interface for developing client applications that use Exchange Web Services. The EWS Managed API provides unified access to Microsoft Exchange Server resources, while using Microsoft Outlook–compatible business logic. The EWS Managed API uses EWS SOAP messages to communicate with the Exchange Client Access server.

Microsoft encourages Microsoft .NET Framework developers to use the EWS Managed API instead of auto-generated proxies to develop against Exchange Web Services. The EWS Managed API object model is significantly easier to use than auto-generated proxy object models.

Prerequisite

  • Microsoft Visual Studio
  • C# Language
  • EWS Managed API
  • Exchange Server with known credentials

Sample Code

Add a reference to Microsoft.Exchange.WebServices

using Microsoft.Exchange.WebServices.Data;

Connect to Exchange Web Services as user1 at YourDomain.com and select the version of Exchange you are using

var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
service.UseDefaultCredentials = true;
service.AutodiscoverUrl("user1@YourDomain.com");
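
If default Windows credentials are not appropriate, for example against Exchange Online, you can supply explicit credentials instead. The following is a small sketch with placeholder account values:

// Sketch: explicit credentials instead of the default Windows credentials.
// "user1@YourDomain.com" / "password" are placeholders.
var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
service.Credentials = new WebCredentials("user1@YourDomain.com", "password");
service.AutodiscoverUrl("user1@YourDomain.com",
    redirectionUrl => redirectionUrl.StartsWith("https://")); // allow Autodiscover redirection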

Select the folder whose items you want to retrieve. In this case, select the calendar to get the online meetings

var calendarFolder = new FolderId(WellKnownFolderName.Calendar);

Choose the period whose meetings you want to retrieve and the maximum number of items to return, and define the extended MAPI properties that identify Lync online meetings

var calendarView = new CalendarView(DateTime.Now, DateTime.Now.AddMonths(1), 1000);
var UCOpenedConferenceID = 
    new ExtendedPropertyDefinition(DefaultExtendedPropertySet.PublicStrings,
                                  "UCOpenedConferenceID",
                                   MapiPropertyType.String);
var OnlineMeetingExternalLink =
    new ExtendedPropertyDefinition(DefaultExtendedPropertySet.PublicStrings,
                                  "OnlineMeetingExternalLink",
                                   MapiPropertyType.String);

Start retrieving the items of the folder, keeping only the appointments that are Lync online meetings

PropertySet iDPropertySet =
    new PropertySet(BasePropertySet.IdOnly) { UCOpenedConferenceID };
calendarView.PropertySet = iDPropertySet;
var lyncMeetings = new List<Appointment>();
var appoResult = service.FindAppointments(calendarFolder, calendarView);
foreach (var appointment in appoResult)
{
    object UCconfId = null;
    if (appointment.TryGetProperty(UCOpenedConferenceID, out UCconfId))
        lyncMeetings.Add(appointment);
}

Load the details of each item, including the online meeting URL

var detailPropertySet =
    new PropertySet(BasePropertySet.FirstClassProperties) { OnlineMeetingExternalLink };
service.LoadPropertiesForItems(lyncMeetings, detailPropertySet);
foreach (Appointment appointment in lyncMeetings)
{
    object lyncMeetingUrl = null;
    appointment.TryGetProperty(OnlineMeetingExternalLink, out lyncMeetingUrl);
}
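
As a simple usage sketch, you could then print each meeting's subject and join URL:

// Usage sketch: list the subjects and join URLs of the Lync meetings found above.
foreach (Appointment appointment in lyncMeetings)
{
    object lyncMeetingUrl = null;
    if (appointment.TryGetProperty(OnlineMeetingExternalLink, out lyncMeetingUrl))
        Console.WriteLine("{0}: {1}", appointment.Subject, lyncMeetingUrl);
}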

Download EWS API

Kinect Hardware


The Kinect for Windows SDK takes advantage of and is dependent upon the specialized components included in all planned versions of the Kinect device. In order to understand the capabilities of the SDK, it is important to first understand the hardware it talks to.

The glossy black case for the Kinect components includes a head as well as a base, as shown in the following figure.

Kinect Device

The head is 12 inches by 2.5 inches by 1.5 inches. The attachment between the base and the head is motorized. The case hides an infrared projector, two cameras, four microphones, and a fan.

Removing the Kinect case is never recommended; however, someone did so in order to show the internal components. On the front of Kinect, from left to right when facing the device, you will find the sensors and light source that are used to capture RGB and depth data. To the far left is the infrared light source. Next to it is the LED ready indicator, then the color camera used to collect RGB data, and finally, on the right (toward the center of the Kinect head), the infrared camera used to capture depth data. The color camera supports a maximum resolution of 1280 x 960, while the depth camera supports a maximum resolution of 640 x 480.
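
As a minimal sketch, assuming the managed Kinect for Windows SDK v1 (Microsoft.Kinect), the color and depth streams can be enabled at these maximum resolutions like this:

// Sketch, assuming the Kinect for Windows SDK v1 managed API (Microsoft.Kinect).
using System.Linq;
using Microsoft.Kinect;

class StreamSetup
{
    static void Main()
    {
        // Pick the first connected sensor, if any.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        // Enable the color and depth streams at their maximum resolutions.
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution1280x960Fps12);
        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.Start();
    }
}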

On the underside of Kinect is the microphone array. The microphone array is composed of four different microphones. One is located to the left of the infrared light source. The other three are evenly spaced to the right of the depth camera.

If you bought a Kinect sensor without an Xbox bundle, the Kinect comes with a Y-cable, which extends the USB connector wire on Kinect as well as providing additional power to Kinect. The USB extender is required because the male connector that comes off of Kinect is not a standard USB connector. The additional power is required to run the motors on the Kinect.

If you buy a new Xbox bundled with Kinect, you will likely not have a Y-cable included with your purchase. This is because the newer Xbox consoles have a proprietary female USB connector that works with Kinect as is and does not require additional power for the Kinect servos. This is a problem—and a source of enormous confusion—if you intend to use Kinect for PC development with the Kinect SDK. You will need to purchase the Y-cable separately if you did not get it with your Kinect. It is typically marketed as a Kinect AC Adapter or Kinect Power Source. Software built using the Kinect SDK will not work without it.

A final piece of interesting Kinect hardware, sold by Nyko rather than by Microsoft, is called the Kinect Zoom. The base Kinect hardware performs depth recognition between 0.8 and 4 meters. The Kinect Zoom is a set of lenses that fit over Kinect, allowing the Kinect sensor to be used in rooms smaller than the standard dimensions Microsoft recommends. It is particularly appealing for users of the Kinect SDK who might want to use it for specialized functionality such as custom finger tracking logic or productivity tool implementations involving a person sitting down in front of Kinect. From experimentation, it actually turns out to not be very good for playing games, perhaps due to the quality of the lenses.

Hardware Requirements:

  • Computer with a dual-core, 2.66-GHz or faster processor
  • Windows 7–compatible graphics card that supports Microsoft DirectX 9.0c capabilities
  • 2 GB of RAM (4 GB of RAM recommended)
  • Kinect for Xbox 360 sensor
  • Kinect USB power adapter

A Multitouch Projector You Can Wear From Microsoft


Microsoft Research is on a bit of a roll lately with their future-tech demonstrations. At the end of last month they showed off a Holoflector augmented reality mirror, a physical object sharing projector called Illumishare, and an interactive transparent 3D desktop using Samsung’s transparent OLED.

This week Microsoft has revealed another device for the future, one which looks to be an extension of Carnegie Mellon’s HCI Institute Omnitouch project. What Microsoft has done is to clip a Kinect motion controller and a pico projector together and mount them on the user’s shoulder. The combination of devices produces a projection on any given surface that the user can interact with just like a touchscreen.

Obviously the parts need to be miniaturized, but this wearable multitouch projector could one day replace the need to actually carry a phone or tablet. Instead, you’d just clip a small device to a shirt pocket or jacket and project your screen when you need it.

The projector doesn’t just use the Kinect to capture input, though; it also helps determine the size of the surface being worked on. If it’s a wall, you may get a 10-inch projection, but if you hold a small notebook up, the image is adjusted to fit within its bounds. That’s both clever and useful if you want what you’re doing to remain a little more private.

And what’s the other benefit of using Kinect? It allows for gestures, so for certain actions you may not even need a display. For example, make a “call someone” gesture, say the name you want to call, and the person’s phone rings.

There are no plans to bring this to market any time soon, but there’s a lot of potential for this setup to become a future replacement for today’s phones and tablets. It’s also another example of the versatility of Kinect, and the potential it has to form the core of many future Microsoft hardware devices.

How Kinect for Windows Works


Unveiling a new Kinect device specifically for Windows was a surprise. Developers have already been working with an official Microsoft beta SDK for Xbox Kinect units for noncommercial use on Windows machines since June, and unofficially using community-developed open-source drivers long before that.

The new Kinect for Windows devices cost more: $250 against the $100-150 retail for the current Xbox Kinect devices. Kinect for Windows general manager Craig Eisler says that the cost difference is mostly because on Xbox, Kinect is “subsidized by consumers buying a number of Kinect games, subscribing to Xbox Live, and making other transactions associated with the Xbox 360 ecosystem.” Hence the bump—although later this year, Microsoft says it will make Kinect for Windows available to students, educators, schools, libraries and museums for $150, the same price as Kinect for Xbox.

Besides just reading “KINECT” in lieu of “XBOX 360,” Kinect for Windows devices also have different firmware and other features from their Xbox cousins. While Kinect for Xbox was designed to recognize whole bodies from across a room, Kinect for Windows has something called “Near Mode,” allowing its camera “to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters,” according to Microsoft.

The idea is that commercial developers—big companies you know, like Google, Adobe, Electronic Arts, Autodesk, as well as more obscure companies developing specialized applications for medicine or education—will build applications using voice or gesture recognition specifically for the desktop PC, portable laptops and tablets, or other Windows implementations besides the living room. Used in those contexts, near-range sensitivity matters much more than recognition at a distance.

Kinect then becomes a general-purpose natural user interface (NUI) for the PC, where “PC” is broadly construed for the post-Wintel era. Windows 8’s Metro interface is already optimized for touchscreens and touchpads; Kinect turbocharges Windows’ voice capture and adds full-motion gesture and facial recognition to the mix. (The only thing it’s missing—so far—is the ability to track eye movements.)

The Kinect for Windows unit also offers a modified USB connector and better protection against noise and interference. Both tweaks are designed to better incorporate the Kinect hardware into the PC environment—even if the basic hardware looks identical to the original.

At its limit, you could imagine Kinect sensors in other form factors: some designed for portable use, like a handheld souped-up Wiimote, others integrated into all-in-one PCs the way that webcams are now. Microsoft had nothing like this to announce, but SuperSite for Windows blogger Paul Thurrott wondered about it out loud during his keynote livechat with ZDNet’s Mary Jo Foley.

Microsoft’s been talking about expanding the use of natural user interfaces in computing for years, even delivering innovative products like the giant multitouch-powered Surface and incorporating better touch and speech recognition into plain-vanilla Windows. Besides Kinect, though, it’s mostly been an R&D-driven future-of-computing hobby.

Even the phrase “natural user interface” still clings clumsily to Steve Ballmer’s tongue. He can’t communicate enthusiasm for the possibilities of NUIs like Bill Gates is able to—astonishing, considering that Ballmer can fire himself up into an almost-awkwardly over-the-top giddiness about almost anything else that Microsoft does.

Ballmer never thought he’d be in this position—not only porting a gaming peripheral to his beloved Windows machines, or even opening it up for commercial development by other software companies, but owning it, taking control of it, and positioning it as a key component in the future of the company.

Considering that a little over a year ago, Microsoft was threatening to sue and/or prosecute anyone who wanted to develop for Kinect on a PC, it’s a remarkable turnaround.

It’s also remarkable that a company that became a giant by selling its software to consumers and hardware partners is now effectively giving its software away for free—and making its money back by selling its own branded hardware.

The commercial development kit and licenses Microsoft has put together for Kinect for Windows don’t follow the Open Kinect model. Instead, they offer something much more controlled. Developers can’t use open drivers or the cheaper Xbox Kinect for commercial projects. Plus, as the moniker “Kinect for Windows” suggests, they’re required to use it on machines running Windows 7 or 8. Finally, even noncommercial projects—still officially permitted on the Xbox Kinect devices—aren’t licensed to use software other than Microsoft’s official commercial SDK to write code for the Kinect for Windows hardware.

“They were smart to adopt what we were doing and turn it into a business for themselves,” Torrone said of Microsoft. They built the Kinect Accelerator to seed projects, featured ones they liked on their website, and rebranded the widespread adoption of the device as “The Kinect Effect.”

“It got away from them for a moment, but they adapted themselves to it and took a leadership position. They had to.”

“VC-40”, Previously Named “LyncKin”: A Great Idea for a Business Solution


LyncKin is a business-oriented video-conferencing application aimed at cutting the costs of video conferencing and increasing its productivity. It brings the power of the Kinect sensor to the control of Lync video conferencing.

Imagine what can come out of bringing two giants together to form one solution to match the increasing demands of the business market. As a Microsoft partner, EgyptNetwork was one of the early respondents to Microsoft’s call “Be Part of The Movement” to develop with the Kinect SDK.

Video conferencing in Lync creates a more personal experience that helps people get to know each other better and communicate more effectively. On the other hand, Kinect serves as the media broadcaster, integrating video and audio capabilities to run side by side with Lync functionality. LyncKin has a user interface in which Kinect captures the user’s gestures to control the application remotely and easily select the desired command. This controlling ability of Kinect makes it easier for the video meeting attendees to influence the video conference in a way that serves the needs of the business market.


LyncKin takes Kinect from the gaming fantasy world into business reality by using Kinect sensing technologies to let the user, a business person, control Lync video conferencing with body gestures and speech. This controlling ability of Kinect makes things easier for the video conference attendees.

With LyncKin, the user becomes the conference controller and can perform activities from their seat by using arm-waving gestures at the camera and voice commands; the Kinect sensor detects both movements and voice in very sophisticated ways.

It can help to improve how people interact with co-workers, customers and partners through a more personalized collaboration experience. LyncKin is an optimized conferencing solution that can build voice and video collaboration for Microsoft Lync environments.

User Experience:

LyncKin provides a way to use the natural-user interface capabilities of Kinect in business settings.

A rich user experience and a unified interface make it easy for people to work together effectively and frequently even when time or distance prevents in-person meetings.

LyncKin enables businesses to conduct a video conference while many of the attendees are scattered in different places, possibly in different countries. Users can also use their body motion to investigate shared content.

Features:

During conferences, LyncKin helps users control the meeting from their seats, as follows:

  • Hand motions can be used to browse Lync contacts, select someone to call, start a video conference with them, easily navigate shared meeting content, and end the call.
  • Voice commands can hold or end the call, or trigger recognition of a new meeting attendee.
  • Face recognition identifies a new meeting attendee and notifies, via text, the participants who are not in the meeting room.

Business Benefits:

  • Adds new capabilities to the Lync unified communications tool for more effective collaboration.
  • Reduces the cost of video conferencing by using the Lync infrastructure.
  • Controls Microsoft Lync without the need for any additional peripherals.
  • Easily used on thin or rich clients.

You can see a video and download a beta version here.

KINECT for Windows


Kinect for Windows consists of the Kinect for Windows hardware and the Kinect for Windows SDK, which supports applications built with C++, C#, or Visual Basic by using Microsoft Visual Studio 2010. The newly released Kinect for Windows SDK version 1 offers improved skeletal tracking, enhanced speech recognition, a modified API, and the ability to support up to four Kinect for Windows sensors plugged into one computer.

New in the 2012 SDK Release:

  • Commercial Ready:

Installer makes it easy to install Kinect for Windows runtime and driver components for end-user deployments.

  • Raw Sensor Streams:

Enables the depth sensor to see objects as close as 40 centimeters and also communicates more information about depth values outside the range than was previously available. There is also improved synchronization between color and depth, mapping depth to color, and a full frame API.

  • Skeletal Tracking:

Provides more accuracy; skeletal tracking now enables control over which user is being tracked by the sensor.

  • Advanced Speech and Audio Capabilities:

Provide the latest Microsoft Speech components and an updated English Language Pack for improved language recognition accuracy. In addition, the appropriate runtime components are now automatically installed with the runtime installer exe.

  • API Improvements:

Enhances consistency and ease of development. New developers should have a much easier time learning how to develop with Kinect for Windows, and all developers will be more productive.

Commercial Kinect for Windows Sensor:

The newly released Kinect for Windows hardware is optimized for use with computers and devices running Windows 7, Windows 8 developer preview (desktop applications only), and Windows Embedded-based devices. Some of the changes to the hardware include:

  • Near Mode:

Enables the camera to see objects as close as 40 centimeters in front of the device without losing accuracy or precision, with graceful degradation out to 3 meters (a code sketch for enabling this mode appears after this list).

  • Shortened USB cable and small dongle:

Ensures reliability across a broad range of computers and improves coexistence with other USB peripherals.

  • Support and software update:

The Kinect for Windows hardware includes a one-year warranty, support, and access to software updates for both speech and human tracking.

  • Hardware Requirements:
  • 32-bit (x86) or 64-bit (x64) processor
  • Dual-core 2.66 GHz or faster processor
  • Dedicated USB 2.0 bus
  • 2 GB RAM
  • OS Requirements:

Requires Windows 7 or Windows Embedded Standard 7
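
Purely as an illustration, a minimal sketch of turning Near Mode on might look like the following; it assumes the Kinect for Windows SDK v1 managed API (Microsoft.Kinect) and Kinect for Windows hardware, since the Kinect for Xbox 360 sensor rejects Near Mode.

// Sketch: enabling Near Mode on the depth stream (Kinect for Windows hardware only).
using System.Linq;
using Microsoft.Kinect;

class NearModeSetup
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.Start();

        // Switches the minimum reliable depth from roughly 80 cm down to 40 cm.
        sensor.DepthStream.Range = DepthRange.Near;
    }
}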

The Kinect for Windows SDK beta can only be used with the Kinect for Xbox 360 hardware. Applications built with this hardware and software are for non-commercial development only. To accommodate existing non-commercial deployments using the SDK beta and the Kinect for Xbox 360 hardware, the beta license is being extended to June 16, 2016. Developers are encouraged to download the Kinect for Windows SDK, which was released February 1, 2012. This SDK provides additional features and updates.

Download the Kinect for Windows SDK beta

Source: Microsoft

Building Communications Applications With The Lync SDKs


The Lync 2010 SDK includes the Lync controls, a set of Silverlight and Windows Presentation Foundation (WPF) controls that you can use to integrate functionality found in the Lync client directly into your applications.

The SDK also includes the Lync application programming interface (API), a brand-new managed API for building custom communications solutions. The Lync API is intended to replace the IMessenger and UCC APIs available with Office Communications Server 2007 R2. The IMessenger API was easy to get started with, but was fairly limited in functionality; it was also a little cumbersome to troubleshoot because it used COM interoperability to interact with the running instance of Communicator on the user’s machine.

The UCC API was very difficult to get started with in comparison, but it provided the most power and functionality if you wanted to build a Communicator replacement. Unlike the UCC API, the Lync API requires the Lync client to be running; it reuses the connection that the client has established with the Lync infrastructure. You can configure the Lync client to run in UI Suppression mode, where its user interface is invisible to the user, enabling you to build custom communications clients previously only possible when using the UCC API.

Lync Functionality – Using the Lync Controls in the Applications

Think of the Lync client as being built out of LEGO blocks, each providing a specific piece of functionality such as showing the presence of contacts, organizing contacts into groups, and interacting with contacts by starting instant message or phone conversations. The Lync controls separate the functionality in Lync clients into individual controls that developers can drag and drop into their Windows Presentation Foundation (WPF) or Silverlight applications.

The Lync controls include a control to show the presence of a contact; for example, the presence of a project manager in a CRM system. Controls are also available to easily start an instant message or audio conversation with that contact at the click of a button, with no additional code required.

A set of other controls provides functionality for managing contact lists; for example, to integrate the user’s Lync contact list into an application. You can also use custom contact lists to create and display an ad-hoc list of contacts, such as the account team for a client in a CRM application. Additional controls are available to search for contacts and display the results. Controls are also available to set the current user’s presence, personal note, and location.

Due to their obvious dependence on user interface elements of the Lync client, the Lync controls are not available in UI Suppression mode.

Integrating Lync functionality into applications using the Lync controls allows users to launch communications directly from the application that they are working in without needing to switch to the Lync client. The Lync controls are available in WPF and Silverlight and are extremely easy to use; you only need to drag and drop the appropriate controls into the application, and they work without the need for any additional code.

Communications – Using the Lync API in the Applications

The Lync API object model exposes extensibility points that allow developers to build applications that interact with the running instance of the Lync client. You can use the Lync API to programmatically sign a user into the Lync client and handle events for changes in its state. You can also start a conversation, add participants, handle conversation and participant events, and add contextual data to the conversation.

You can use the Lync API to create subscriptions on attributes of contacts in your contact list; for example, to track when the availability of a particular contact changes. The Lync API also provides functionality to modify attributes of users signed in to Lync, such as changing their presence or publishing a personal note or location.

Like the IMessenger API, the Lync API includes automation: the ability to start conversations in different modalities (such as instant message or audio/video) with a very small amount of code. The functionality in automation simply invokes the necessary Lync user interface elements, such as a Lync conversation that includes the Application Sharing modality so that a user can share her desktop with another user. Because it is dependent on Lync user interface elements, the functionality in automation is not available when the Lync client is running in UI Suppression mode.
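
As a minimal sketch of this automation pattern (the SIP address and message text are placeholders, not taken from the article), starting an instant message conversation might look like this:

// Sketch: start an IM conversation through the Automation API.
// Requires System.Collections.Generic, Microsoft.Lync.Model and Microsoft.Lync.Model.Extensibility.
Automation automation = LyncClient.GetAutomation();

var participants = new List<string> { "sip:someone@contoso.com" };   // placeholder URI
var settings = new Dictionary<AutomationModalitySettings, object>
{
    { AutomationModalitySettings.FirstInstantMessage, "Hello from the Lync API." },
    { AutomationModalitySettings.SendFirstInstantMessageImmediately, true }
};

automation.BeginStartConversation(
    AutomationModalities.InstantMessage,
    participants,
    settings,
    ar => automation.EndStartConversation(ar),
    null);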

In conjunction with the Lync controls, you can use the Lync API to easily add communications functionality into Silverlight, WPF, and Windows Forms applications. For example, you can spruce up a customer relationship management (CRM) application by integrating presence and click-to-call functionality, allowing users to accomplish their work without needing to switch back and forth between the application and the Lync client.

The Lync UI Suppression Mode

When the Lync client is configured to run in UI Suppression mode, its interface is completely hidden from the user. Applications that use Lync UI Suppression are responsible for recreating those user interface elements from scratch. The Lync API with Lync running in UI Suppression mode is the recommended development pattern for applications you would have previously built with the UCC API.

Lync UI Suppression requires that the Lync client is installed on the user’s machine; this eliminates the complexity of managing the connectivity of the application back to the Lync server infrastructure. In UI Suppression, you use the Lync API to replicate some of the functionality available in the Lync client, such as signing users into Lync, retrieving their contact list, and starting and responding to conversations in different modalities.

When working with UI Suppression, you interact with conversations at the modality level—activating individual modalities manually, creating conversations, adding participants, and disconnecting the modalities when the conversation is completed. For example, you can build a Silverlight instant messaging client that provides a completely customized user interface for instant message conversations. In this case, you would be responsible for recreating application functionality and user interface elements such as a contact list and conversation window. You would work directly with the instant message modality, creating a conversation, connecting the modality, sending instant message text to participants, notifying participants when someone is typing, and delivering the instant message text to the participants in the conversation.
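
A rough sketch of that flow with the instant message modality (again with a placeholder contact URI and message, and without the event handling a real client would need) could look like the following:

// Sketch: create a conversation and send an IM while Lync runs in UI Suppression mode.
// Requires Microsoft.Lync.Model and Microsoft.Lync.Model.Conversation.
LyncClient client = LyncClient.GetClient();
Conversation conversation = client.ConversationManager.AddConversation();
conversation.AddParticipant(
    client.ContactManager.GetContactByUri("sip:someone@contoso.com"));  // placeholder URI

// In a production client you would wait for the participant and modality to be ready first.
var im = (InstantMessageModality)conversation.Modalities[ModalityTypes.InstantMessage];
im.BeginSendMessage(
    "Hello from a UI Suppression client.",
    ar => im.EndSendMessage(ar),
    null);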

Using the Lync API with Lync running in UI Suppression mode, you can build compelling Lync-replacement solutions such as a custom instant messaging client, or a dedicated audio/video conferencing solution.

Working with the UCMA

Although the Lync SDK is used to integrate communications functionality into applications that run on the client, UCMA is typically used to build communications applications that run on the server; for example, hosted in Internet Information Services (IIS), exposed through Windows Communication Foundation (WCF), or running in a Windows Service. A UCMA application is usually a long-running process such as an automatic call distributor used to handle and distribute incoming calls in a call center. Users interact with the UCMA application via an endpoint that can either be a contact in Lync, such as sip:HelpDesk@fabrikam.com, or simply a phone number. The user can start a Lync call or instant message with the UCMA application contact, or dial the phone number associated with the application.
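
As a rough sketch only (the user agent string, server name, and port below are assumptions, and credential handling and error handling are reduced to a minimum), establishing a UCMA 3.0 endpoint for such an application might look like this:

// Sketch: start a UCMA collaboration platform and establish a user endpoint.
// Requires Microsoft.Rtc.Collaboration, Microsoft.Rtc.Signaling and System.Net.
var platformSettings = new ClientPlatformSettings("ContosoCallCenter", SipTransportType.Tls);
var platform = new CollaborationPlatform(platformSettings);
platform.EndStartup(platform.BeginStartup(null, null));

var endpointSettings = new UserEndpointSettings(
    "sip:HelpDesk@fabrikam.com",    // the application contact URI from the scenario
    "lyncserver.fabrikam.com",      // assumed registrar/proxy FQDN
    5061);                          // default TLS SIP port
endpointSettings.Credential = CredentialCache.DefaultNetworkCredentials;

var endpoint = new UserEndpoint(platform, endpointSettings);
endpoint.EndEstablish(endpoint.BeginEstablish(null, null));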

Consider the following scenario where Contoso, a fictitious company, uses a UCMA-based application to run its call center operations.

When customers call Contoso’s customer service phone number, the UCMA application picks up the calls and guides callers through a workflow, such as one built with the UCMA Workflow SDK, to gather information from them such as the reason for their call, their account number, and so on. After the workflow gathers the necessary information from the callers, it places them on hold and searches for an agent with the right skills to assist them. Customers remain on hold until an agent becomes available; the UCMA application tracks all the agents’ Lync presence so it knows when an agent becomes available again to handle a call.

When an agent picks up calls, he or she already knows a lot about the callers based on the information they provided. An Agent Dashboard application hosted in the Lync conversation window can display information about the caller such as order history or any open customer service tickets that require attention. The agent can use this information to provide better service to the customer.

An application such as the customer service Agent Dashboard is built using the Lync SDK, including the Lync controls and the Lync API. The UCMA application interacts with the Agent Dashboard using the Context Channel, a new feature in UCMA 3.0 that provides a channel across which a UCMA application and Lync SDK application can send information to each other. For example, if the agent realizes that he needs to consult another agent to help with the call, he can issue an “escalate” command from the Agent Dashboard application. The command is sent across the context channel to the UCMA application, which knows how to process it and look for another available agent with the necessary skills to assist with the call.

Part of a supervisor’s duties in Contoso’s customer service department is to monitor the performance of agents and coach them on how to provide better service to customers. The supervisor can launch a Supervisor Dashboard application that shows a list of all active calls. The supervisor selects a call to silently join, allowing her to monitor the call without the knowledge of either the customer or the agent. The new audio routes functionality in UCMA 3.0 enables developers to build routes across which audio can travel in a conference, effectively controlling who can hear what. When the supervisor is monitoring a call, audio flows to her from the conference but doesn’t flow back in, allowing her to listen in on a call without being heard. If the supervisor needs to provide coaching to the customer service agent, an audio route is established from the supervisor to the agent, allowing her to “whisper” to the agent without the customer hearing any of the conversation.

UCMA 3.0 includes several other enhancements that are covered in more detail later in the book, including an easier development experience for working with presence and conferences, and a feature known as auto-provisioning, which greatly simplifies the process of managing the plumbing and configuration information required to run a UCMA application.

Building Workflow Applications with the UCMA Workflow SDK

You use the UCMA Workflow SDK to build communications-enabled workflow solutions such as IVR systems and virtual personal assistants. You typically use an IVR system to gather information from a caller such as the customer account number and reason for the call before connecting him or her to a live agent. A virtual personal assistant, on the other hand, provides services to the caller such as the ability to reserve a conference room from a mobile phone.

For a more concrete example, consider this scenario. In the legal industry, potential cases need to be vetted for any conflicts of interest that could prevent the firm from being able to take on the case. This process is referred to as new matter intake, and each potential case is called a matter. Most law firms have software in place to streamline this process; however, such a solution can be extended to provide users with the ability to call in and check on the status of a new matter.

For example, an attorney could place a call to the New Matter Intake application contact in Microsoft Lync from her mobile phone. Using text-to-speech technology, the IVR prompts the attorney to enter her identification PIN and validates her identity. The IVR can then execute code to access the database, retrieve a list of outstanding matters for that attorney, and prompt her to select one. After the attorney selects a matter, the IVR can again access the database to identify the conflicts attorney assigned to the matter. The IVR can now check the presence of the conflicts attorney, and if he is available, ask the caller whether she wants to be transferred. The IVR can then perform a blind transfer of the call and disconnect itself from the call.

The UCMA 3.0 Workflow SDK enables developers to visually construct communications-enabled workflows by dragging workflow activities onto a design surface, arranging and connecting them to form the workflow solution. You can construct workflows to accept audio or instant message calls, or both.

In the case of audio calls, input from the user can be in the form of dual-tone multi-frequency (DTMF) tones (choosing an option by entering its corresponding number using the phone’s keypad), speech recognition, or both. The text-to-speech engine, available in 26 different languages, converts text to prompts that the caller hears during different activities of the workflow. You can also substitute professionally recorded audio prompts to give the IVR a more polished feel.

The previous attorney example represents an incoming communications workflow; however, developers can also build outgoing communications workflows. For example, a person might receive an automated call from the Service Desk asking him to rate his experience with a ticket he recently opened. The communications workflow can ask him several questions, such as his satisfaction with how the ticket was handled, and then save the results of the survey to a database when the call is completed.

Workflows are a critical part of a communications solution, allowing the software to provide services to a caller and only transferring the call to a live customer service agent—the comparatively more expensive resource—if necessary and only after providing the agent with all the relevant information about the caller.

(Source: Professional Unified Communications Development with Microsoft Lync Server 2010 by George Durzi and Michael Greenlee)

Kinect out of the Xbox


Kinect for Xbox 360, or simply Kinect (originally known by the code name Project Natal), is a motion-sensing input device by Microsoft for the Xbox 360 video game console. Based around a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360 without the need to touch a game controller, through a natural user interface (NUI) using gestures and spoken commands. The project is aimed at broadening the Xbox 360’s audience beyond its typical gamer base. Kinect competes with the Wii Remote Plus and PlayStation Move with PlayStation Eye motion controllers for the Wii and PlayStation 3 home consoles, respectively.

After selling a total of 8 million units in its first 60 days, the Kinect holds the Guinness World Record of being the “fastest selling consumer electronics device”.

Microsoft released a non-commercial Kinect software development kit (SDK) for Windows on June 16, 2011, with a commercial version following at a later date. This SDK will allow .NET developers to write Kinect-enabled apps in C++/CLI, C#, or VB.NET.


Kinect is based on software technology developed internally by Rare, a subsidiary of Microsoft Game Studios owned by Microsoft, and on range camera technology by PrimeSense, which developed a system that can interpret specific gestures, making completely hands-free control of electronic devices possible by using an infrared projector, a camera, and a special microchip to track the movement of objects and individuals in three dimensions. This 3D scanner system, called Light Coding, employs a variant of image-based 3D reconstruction.

The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot and is designed to be positioned lengthwise above or below the video display. The device features an “RGB camera, depth sensor and multi-array microphone running proprietary software”, which provide full-body 3D motion capture, facial recognition, and voice recognition capabilities. At launch, voice recognition was only made available in Japan, the United Kingdom, Canada, and the United States. Mainland Europe will receive the feature in spring 2011. The Kinect sensor’s microphone array enables the Xbox 360 to conduct acoustic source localization and ambient noise suppression, allowing for things such as headset-free party chat over Xbox Live.

The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions. The sensing range of the depth sensor is adjustable, and the Kinect software is capable of automatically calibrating the sensor based on gameplay and the player’s physical environment, accommodating the presence of furniture or other obstacles.

Described by Microsoft personnel as the primary innovation of Kinect, the software technology enables advanced gesture recognition, facial recognition, and voice recognition. According to information supplied to retailers, Kinect is capable of simultaneously tracking up to six people, including two active players for motion analysis with a feature extraction of 20 joints per player. However, PrimeSense has stated that the number of people the device can “see” (but not process as players) is only limited by how many will fit in the field of view of the camera.

Microsoft Lync Server


Microsoft Lync Server (previously Microsoft Office Communications Server, OCS, and Microsoft Live Communications Server, LCS) is an enterprise real-time communications server, providing the infrastructure for enterprise instant messaging, presence, file transfer, peer-to-peer and multiparty voice and video calling, ad-hoc and structured conferences (audio, video and web) and, through a third-party gateway or SIP trunk, PSTN connectivity. These features are available within an organization, between organizations, and with external users on the public internet or on standard phones on the PSTN, as well as via SIP trunking.

Versions History

  • 2013 – Microsoft Lync Server 2013
  • 2010 – Microsoft Lync Server 2010
  • 2009 – Office Communications Server 2007 R2
  • 2007 – Office Communications Server 2007
  • 2006 – Live Communications Server 2005 with SP1
  • 2005 – Live Communications Server 2005, codenamed Vienna
  • 2003 – Live Communications Server 2003

Client software and devices

Microsoft Lync is the primary client application released with Lync Server. This client is used for IM, presence, voice and video calls, desktop sharing, file transfer, and ad hoc conferences. Microsoft also ships the Microsoft Attendant Console, a version of the Lync client oriented more towards receptionists, delegates/secretaries, and others who handle a large volume of inbound calls.

Other client software and devices include:

  • Lync Communicator Mobile is a mobile edition of the Lync Server 2010 client, designed to offer similar functionality including voice calls, instant messaging, presence, and single-number reachability. Clients for all major platforms, including the iPhone, are being developed.
  • Lync Communicator Web Access is a web-based instant messaging and presence client. This version also works in the IE, Firefox, and Opera browsers.
  • Microsoft RoundTable is an audio and video conferencing device that provides a 360-degree view of the conference room and tracks the various speakers. This device is now produced and sold via Polycom under the product name CX5000.
  • LG-Nortel and Polycom also make IP phones in a traditional phone form factor that run an embedded edition of Office Communicator 2007. These physical phones are also referred to by Microsoft as Tanjay phones.

Features

One basic use of Lync Server is instant messaging and presence within a single organization. This includes support for rich presence information, file transfer, instant messaging as well as voice and video communication. (These latter features are often not possible even within a single organization using public IM clients, due to the effects of negotiating the corporate firewall and network address translation). Lync uses Interactive Connectivity Establishment for NAT traversal and TLS encryption to enable secure voice and video both inside and outside the corporate network.

Lync Server also supports remote users, both corporate users on the internet (e.g. mobile or home workers) as well as users in partner companies. Lync supports “federation” – enabling interoperability with other corporate IM networks. Federation can be configured either manually (where each partner manually configures the relevant edge servers in the other organization) or automatically (using the appropriate SRV records in the DNS).

Microsoft Lync Server uses Session Initiation Protocol (SIP) for signaling along with the SIMPLE extensions to SIP for IM and presence. Media is transferred using RTP/SRTP. The Live Meeting client uses PSOM to download meeting content. The Communicator client also uses HTTPS to connect with the web components server to download address books, expand distribution lists, etc. By default, Office Communications Server encrypts all signaling and media traffic using SIP over TLS and SRTP. There is one exception to this: traffic between the Mediation Server and a basic media gateway is carried as SIP over TCP and RTP. However, if a hybrid gateway is leveraged, such as one from Microsoft’s Open Interoperability Site, then in fact everything is encrypted from all points (if SSL certificates are configured on the gateway and TLS is elected as the transmission type).

IM is only one portion of the Lync suite. The other major components are VoIP telephony and video conferencing through the desktop Communicator client. Remote access is possible using mobile and web clients.

Several third parties have incorporated Lync functionality on existing platforms. HP has implemented OCS on their Halo video conferencing platform.

Microsoft released Microsoft Office Communications Server 2007 R2 in February 2009. The R2 release added the following features:

  • Dial-in audio conferencing
  • Desktop sharing
  • Persistent Group Chat
  • Attendant console and delegation
  • Session Initiation Protocol trunking
  • Mobility and single-number reach